Updates from: 10/03/2023 01:11:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Sign Up Or Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md
Previously updated : 01/30/2023 Last updated : 10/03/2023
When the custom policy runs:
- **Orchestration Step 3** - This step runs if the user signs up (`objectId` doesn't exist) and doesn't select a company `accountType`, so we have to ask the user to input an `accessCode` by invoking the *AccessCodeInputCollector* self-asserted technical profile.
- **Orchestration Step 4** - This step runs if the user signs up (`objectId` doesn't exist), so we display the sign-up form by invoking the
-*UserInformationCollector* self-asserted technical profile. This step runs whether a user signs up or signs in.
+*UserInformationCollector* self-asserted technical profile.
- **Orchestration Step 5** - This step reads account information from Microsoft Entra ID (we invoke `AAD-UserRead` Microsoft Entra technical profile), so it runs whether a user signs up or signs in.
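
For readers skimming the flow, here's a purely illustrative Python model of the skip logic these preconditions express. This is not how Azure AD B2C executes the XML policy, just a sketch of which steps run on the sign-up versus sign-in path:

```python
def invoke(profile: str) -> dict:
    """Stand-in for invoking a B2C technical profile."""
    print(f"invoking {profile}")
    return {}

def run_journey(claims: dict) -> dict:
    # Step 3: sign-ups only (no objectId yet) that didn't pick a company account.
    if "objectId" not in claims and claims.get("accountType") != "company":
        claims.update(invoke("AccessCodeInputCollector"))
    # Step 4: sign-ups only - display the sign-up form.
    if "objectId" not in claims:
        claims.update(invoke("UserInformationCollector"))
    # Step 5: always runs - read the account from Microsoft Entra ID.
    claims.update(invoke("AAD-UserRead"))
    return claims

run_journey({})                        # sign-up path: steps 3, 4, and 5 run
run_journey({"objectId": "1234"})      # sign-in path: only step 5 runs
```
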
You can sign in by entering the **Email Address** and **Password** of an existing account.
- Learn how to [Remove the sign-up link](add-sign-in-policy.md), so users can just sign in.
-- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
+- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md
Previously updated : 08/22/2023 Last updated : 09/12/2023
The technical profile also returns claims that aren't returned by the identity provider.
| MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the Http status code is in the 5xx range. The default is `false`. |
| DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. If you need to build the metadata endpoint URL based on Issuer, set this to `true`. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview), `private_key_jwt`. For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic`, and `private_key_jwt`. For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
|token_signing_algorithm| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`.|
| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](./session-behavior.md#sign-out). Possible values: `true` (default), or `false`. |
|ReadBodyClaimsOnIdpRedirect| No| Set to `true` to read claims from response body on identity provider redirect. This metadata is used with [Apple ID](identity-provider-apple-id.md), where claims return in the response payload.|
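
For context, `private_key_jwt` means the client authenticates by sending a signed JWT assertion instead of a shared secret. The following Python sketch (using the PyJWT library; the endpoint, client ID, and generated key are illustrative stand-ins) shows roughly what such an assertion contains and how it replaces the secret in the token request:

```python
import time
import uuid

import jwt  # PyJWT, with the "cryptography" extra for RS256 signing
from cryptography.hazmat.primitives.asymmetric import rsa

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # illustrative issuer
CLIENT_ID = "00000000-0000-0000-0000-000000000000"       # illustrative app ID

# Illustrative only: a real client loads its registered private key instead.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

now = int(time.time())
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,          # issuer and subject are both the client
        "sub": CLIENT_ID,
        "aud": TOKEN_ENDPOINT,     # audience is the token endpoint
        "jti": str(uuid.uuid4()),  # unique ID so the assertion can't be replayed
        "iat": now,
        "exp": now + 300,          # short-lived: five minutes
    },
    private_key,
    algorithm="RS256",             # or RS512, per token_signing_algorithm
)

# The signed JWT replaces a shared client secret in the token request:
token_request = {
    "grant_type": "authorization_code",
    "client_id": CLIENT_ID,
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": assertion,
}
```
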
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-user-accounts.md
As you design and operationalize a log monitoring and alerting strategy, conside
| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
| - | - | - | - | - |
-| Leaked credentials user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Leaked credentials <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Microsoft Entra Threat Intelligence user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Microsoft Entra threat intelligence <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Atypical travel sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Atypical travel <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Anomalous Token| Varies| Microsoft Entra ID Risk Detection logs| UX: Anomalous Token <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Malware linked IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Malware linked IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Suspicious browser sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious browser <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Unfamiliar sign-in properties sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Unfamiliar sign-in properties <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Malicious IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Malicious IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Suspicious inbox manipulation rules sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious inbox manipulation rules<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Password Spray sign-in risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Password spray<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Impossible travel sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Impossible travel<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| New country/region sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: New country/region<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Activity from anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Activity from Anonymous IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Suspicious inbox forwarding sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious inbox forwarding<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
-| Microsoft Entra threat intelligence sign-in risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Microsoft Entra threat intelligence<br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md) |
+| Leaked credentials user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Leaked credentials <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Microsoft Entra Threat Intelligence user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Microsoft Entra threat intelligence <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Atypical travel sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Atypical travel <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Anomalous Token| Varies| Microsoft Entra ID Risk Detection logs| UX: Anomalous Token <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Malware linked IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Malware linked IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Suspicious browser sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious browser <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Unfamiliar sign-in properties sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Unfamiliar sign-in properties <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Malicious IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Malicious IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Suspicious inbox manipulation rules sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious inbox manipulation rules<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Password Spray sign-in risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Password spray<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Impossible travel sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Impossible travel<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| New country/region sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: New country/region<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Activity from anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Activity from Anonymous IP address<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Suspicious inbox forwarding sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Suspicious inbox forwarding<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Microsoft Entra threat intelligence sign-in risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Microsoft Entra threat intelligence<br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
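
The API column of the rows above points at the Microsoft Graph `riskDetection` resource. As a rough illustration of pulling those detections programmatically, here's a minimal Python sketch using the `requests` library; acquiring the access token is stubbed out, and the filter value is one example risk event type:

```python
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections"

def fetch_risk_detections(access_token: str, risk_type: str) -> list[dict]:
    """Return risk detections of one type, e.g. 'leakedCredentials'."""
    response = requests.get(
        GRAPH_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$filter": f"riskEventType eq '{risk_type}'"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["value"]

# Example usage, given a token with IdentityRiskEvent.Read.All permission:
# for d in fetch_risk_detections(token, "leakedCredentials"):
#     print(d["userPrincipalName"], d["riskLevel"], d["detectedDateTime"])
```
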
For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md).
The following are listed in order of importance based on the effect and severity
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Authentications of privileged accounts outside of expected controls.| High| Microsoft Entra sign-in log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma ruless](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| When only single-factor authentication is required.| Low| Microsoft Entra sign-in log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications of privileged accounts outside of expected controls.| High| Microsoft Entra sign-in log| Status = success<br>-and-<br>UserPrincipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info = \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| When only single-factor authentication is required.| Low| Microsoft Entra sign-in log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Discover privileged accounts not registered for MFA.| High| Azure Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. |
| Successful authentications from countries/regions your organization doesn't operate out of.| Medium| Microsoft Entra sign-in log| Status = success<br>Location = \<unapproved country/region\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Successful authentication, session blocked by Conditional Access.| Medium| Microsoft Entra sign-in log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by Conditional Access| Monitor and investigate when authentication is successful, but session is blocked by Conditional Access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Successful authentication after you have disabled legacy authentication.| Medium| Microsoft Entra sign-in log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication, session blocked by Conditional Access.| Medium| Microsoft Entra sign-in log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by Conditional Access| Monitor and investigate when authentication is successful, but session is blocked by Conditional Access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication after you have disabled legacy authentication.| Medium| Microsoft Entra sign-in log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
We recommend you periodically review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. In addition, review for increases in successful authentication, or successful authentications at unexpected times, based on the location.

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - | - |- |- |- |
-| Authentications to MBI and HBI application using single-factor authentication.| Low| Microsoft Entra sign-in log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Authentications at days and times of the week or year that countries/regions do not conduct normal business operations.| Low| Microsoft Entra sign-in log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries/regions do not conduct normal business operations.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Measurable increase of successful sign ins.| Low| Microsoft Entra sign-in log| Capture increases in successful authentication across the board. That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications to MBI and HBI applications using single-factor authentication.| Low| Microsoft Entra sign-in log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications on days and at times of the week or year when countries/regions don't conduct normal business operations.| Low| Microsoft Entra sign-in log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications on days and at times of the week or year when countries/regions don't conduct normal business operations.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Measurable increase of successful sign ins.| Low| Microsoft Entra sign-in log| Capture increases in successful authentication across the board. That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
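
Several of the rows above filter the Microsoft Entra sign-in log on `Status = success` plus an authentication requirement. As a hedged sketch of automating one such check, the following Python example runs a KQL query over a Log Analytics workspace with the `azure-monitor-query` SDK; the workspace ID is a placeholder, and the column names assume the standard `SigninLogs` schema:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# KQL against the SigninLogs table exported to Log Analytics.
QUERY = """
SigninLogs
| where ResultType == "0"                               // success
| where AuthenticationRequirement == "singleFactorAuthentication"
| summarize Count = count() by UserPrincipalName, AppDisplayName
| order by Count desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

# Assuming the query succeeds, print each result row as a dict.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```
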
## Next steps
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Previously updated : 09/27/2023 Last updated : 10/02/2023
+ms.localizationpriority: high
# Microsoft Entra certificate-based authentication technical deep dive
active-directory Howto Policy Persistent Browser Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md
Protect user access on unmanaged devices by preventing browser sessions from rem
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Under **Target resources** > **Cloud apps** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **Filter for devices**, set **Configure** to **Yes**.
- 1. Under **Devices matching the rule:**, set to **Include filtered devices in policy**.
+ 1. Under **Devices matching the rule:**, set to **Exclude filtered devices in policy**.
1. Under **Rule syntax** select the **Edit** pencil and paste the following expression in the box, then select **Apply**.
1. device.trustType -ne "ServerAD" -or device.isCompliant -ne True
1. Select **Done**.
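
For clarity, the rule above matches devices that are either not Microsoft Entra hybrid joined (`trustType -ne "ServerAD"`) or not compliant; whether matching devices are included or excluded is set in the earlier step. A purely illustrative Python predicate mirroring the rule's boolean logic (the `Device` fields are stand-ins for the device attributes the rule references):

```python
from dataclasses import dataclass

@dataclass
class Device:
    trust_type: str     # "ServerAD" means Microsoft Entra hybrid joined
    is_compliant: bool

def matches_rule(device: Device) -> bool:
    """True when the device matches the filter rule."""
    return device.trust_type != "ServerAD" or not device.is_compliant

# A compliant, hybrid-joined device is the only kind the rule doesn't match:
assert not matches_rule(Device(trust_type="ServerAD", is_compliant=True))
assert matches_rule(Device(trust_type="AzureAD", is_compliant=False))
```
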
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
To obtain the sample application, you can either clone it from GitHub or downloa
## Configure the project
-1. In your IDE, open the project folder, *ms-identity-javascript-tutorial/angular-spa*, containing the sample.
+1. In your IDE, open the project folder, *ms-identity-javascript-tutorial*, containing the sample.
1. Open *1-Authentication/1-sign-in/App/authConfig.js* and replace the file contents with the following snippet:

   :::code language="javascript" source="~/ms-identity-docs-code-javascript/js-spa/App/authConfig.js":::
Run the project with a web server by using Node.js:
- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md).
-- Learn more by building this JavaScript SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-spa.md)
+- Learn more by building this JavaScript SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-spa.md)
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
This article explains how the local administrators membership update works and h
## How it works
-When you connect a Windows device with Microsoft Entra ID using a Microsoft Entra join, Microsoft Entra ID adds the following security principals to the local administrators group on the device:
+At the time of Microsoft Entra join, we add the following security principals to the local administrators group on the device:
- The Microsoft Entra Global Administrator role
- The Azure AD Joined Device Local Administrator role
- The user performing the Microsoft Entra join
+> [!NOTE]
+> This is done during the join operation only. If an administrator makes changes after this point, they need to update the group membership on the device.
+ By adding Microsoft Entra roles to the local administrators group, you can update the users that can manage a device anytime in Microsoft Entra ID without modifying anything on the device. Microsoft Entra ID also adds the Azure AD Joined Device Local Administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users that have been *only* assigned the Azure AD Joined Device Local Administrator role to manage a device.

## Manage the Global Administrator role
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Microsoft Azure operated by 21Vianet:
### Authentication requirements
-[Microsoft Entra Guest accounts](/azure/active-directory/external-identities/what-is-b2b) can't connect to Azure Bastion via Microsoft Entra authentication.
+[Microsoft Entra Guest accounts](/azure/active-directory/external-identities/what-is-b2b) can't connect to Azure VMs or Azure Bastion enabled VMs via Microsoft Entra authentication.
<a name='enable-azure-ad-login-for-a-windows-vm-in-azure'></a>
To use passwordless authentication for your Windows VMs in Azure, you need the W
- Windows Server 2022 with [2022-10 Cumulative Update for Microsoft server operating system (KB5018421)](https://support.microsoft.com/kb/KB5018421) or later installed.

> [!IMPORTANT]
-> There is no requirement for Windows client machine to be either Microsoft Entra registered, or Microsoft Entra joined or Microsoft Entra hybrid joined to the *same* directory as the VM. Additionally, to RDP by using Microsoft Entra credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login.
+> The Windows client machine must be either Microsoft Entra registered, Microsoft Entra joined, or Microsoft Entra hybrid joined to the *same* directory as the VM. Additionally, to RDP by using Microsoft Entra credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login. This requirement doesn't exist for [passwordless sign-in](#log-in-using-passwordless-authentication-with-microsoft-entra-id).
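
The two roles named above are standard Azure RBAC roles, so they can be assigned at the VM scope like any other role. As a hedged sketch (not the documented procedure; the subscription ID, VM scope, and user object ID are placeholders), here's what that assignment could look like with the `azure-mgmt-authorization` Python SDK:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholders: substitute your subscription, VM resource ID, and user object ID.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
VM_SCOPE = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Compute/virtualMachines/myVM"
)
USER_OBJECT_ID = "11111111-1111-1111-1111-111111111111"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Look up the built-in role by name instead of hard-coding its definition GUID.
role = next(iter(client.role_definitions.list(
    VM_SCOPE, filter="roleName eq 'Virtual Machine User Login'")))

client.role_assignments.create(
    scope=VM_SCOPE,
    role_assignment_name=str(uuid.uuid4()),  # assignment names are GUIDs
    parameters={
        "role_definition_id": role.id,
        "principal_id": USER_OBJECT_ID,
        "principal_type": "User",
    },
)
```
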
To connect to the remote computer:
active-directory Manage Device Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md
From there, you can go to **All devices** to:
[![Screenshot that shows the All devices view.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox)

> [!TIP]
-> - Microsoft Entra hybrid joined Windows 10 or newer devices don't have an owner. If you're looking for a device by owner and don't find it, search by the device ID.
+> - Microsoft Entra hybrid joined Windows 10 or newer devices don't have an owner unless the primary user is set in Microsoft Intune. If you're looking for a device by owner and don't find it, search by the device ID.
>
> - If you see a device that's **Microsoft Entra hybrid joined** with a state of **Pending** in the **Registered** column, the device has been synchronized from Microsoft Entra Connect and is waiting to complete registration from the client. See [How to plan your Microsoft Entra hybrid join implementation](hybrid-join-plan.md). For more information, see [Device management frequently asked questions](faq.yml).
>
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
The state is displayed only when the device is Microsoft Entra joined or Microsoft Entra hybrid joined.
* *FAILED. ERROR* if the test was unable to run. This test requires network connectivity to Microsoft Entra ID under the system context.

> [!NOTE]
> The **DeviceAuthStatus** field was added in the Windows 10 May 2021 update (version 21H1).
+- **Virtual Desktop**: There are three cases where this appears.
+ - NOT SET - VDI device metadata is not present on the device.
+ - YES - VDI device metadata is present and dsregcmd outputs associated metadata including:
+ - Provider: Name of the VDI vendor.
+ - Type: Persistent VDI or non-persistent VDI.
+ - User mode: Single user or multi-user.
+ - Extensions: Number of key value pairs in optional vendor specific metadata, followed by key value pairs.
+ - INVALID - The VDI device metadata is present but not set correctly. In this case, dsregcmd outputs the incorrect metadata.
### Sample device details output
active-directory Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md
If you can't verify a custom domain name, try the following suggestions:
- **Wait at least an hour and try again.** DNS records must propagate before you can verify the domain. This process can take an hour or more.
-- **If you are trying to verify a child domain, verify the parent domain first.** Make sure the parent domain is created and verified first before you try to verify a child domain.
-
- **Make sure the DNS record is correct.** Go back to the domain name registrar site. Make sure the entry is there, and that it matches the DNS entry information provided in the Microsoft Entra admin center.
  - If you can't update the record on the registrar site, share the entry with someone who has permissions to add the entry and verify it's correct.
active-directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md
Last updated 08/17/2023

# Customer data storage and processing for European customers in Microsoft Entra ID
-Microsoft Entra ID stores customer data in a geographic location based on how a tenant was created and provisioned. The following list provides information about how the location is defined:
+Microsoft Entra stores customer data in a geographic location based on how a tenant was created and provisioned. The following list provides information about how the location is defined:
-* **Azure portal or Microsoft Entra API** - A customer selects a location from the pre-defined list.
+* **Microsoft Entra admin center or Microsoft Entra API** - A customer selects a location from the pre-defined list.
* **Dynamics 365 and Power Platform** - A customer provisions their tenant in a pre-defined location.
-* **EU Data Residency** - For customers who provided a location in Europe, Microsoft Entra ID stores most of the customer data in Europe, except where noted later in this article.
-* **EU Data Boundary** - For customers who provided a location that is within the EU Data Boundary (members of the EU and EFTA), Microsoft Entra ID stores and processes most of the customer data in the EU Data Boundary, except where noted later in this article.
+* **EU Data Residency** - For customers who provided a location in Europe, Microsoft Entra stores most of the customer data in Europe, except where noted later in this article.
+* **EU Data Boundary** - For customers who provided a location that is within the [EU Data Boundary](/privacy/eudb/eu-data-boundary-learn#eu-data-boundary-countries-and-datacenter-locations) (members of the EU and EFTA), Microsoft Entra stores and processes most of the customer data in the EU Data Boundary, except where noted later in this article.
* **Microsoft 365** - The location is based on a customer provided billing address.

The following sections provide information about customer data that doesn't meet the EU Data Residency or EU Data Boundary commitments.
-## Services permanently excluded from the EU Data Residency and EU Data Boundary
+## Services that will temporarily transfer a subset of customer data out of the EU Data Residency and EU Data Boundary
-* **Reason for customer data egress** - Some forms of communication, such as phone calls or text messaging platforms like SMS, RCS, or WhatsApp, rely on a network that is operated by global providers. Device vendor-specific services, such as push notifications from Apple or Google, may be outside of Europe.
-* **Types of customer data being egressed** - User account data (phone number).
-* **Customer data location at rest** - In EU Data Boundary.
-* **Customer data processing** - Some processing may occur globally.
-* **Services** - multifactor Authentication
+For some components of a service, work is in progress to be included in the EU Data Residency and EU Data Boundary, but completion of this work is delayed. The following sections in this article explain the customer data that these services currently transfer out of Europe as part of their service operations.
-## Services temporarily excluded from the EU Data Residency and EU Data Boundary
+**EU Data Residency:**
-Some services have work in progress to be EU Data Residency and EU Data Boundary compliant, but this work is delayed beyond January 1, 2023. The following details explain the customer data that these features currently transfer out of the EU as part of their service operations:
- **Reason for customer data egress** - A small number of tenants are stored outside of the EU location due to one of the following reasons:
-* **Reason for customer data egress** - To provide reliable and scalable service, Microsoft performs regular analytics that involve transfers of data outside the EU location.
-* **Types of customer data being egressed** - User and device account data, usage data, and service configuration (application, policy, and group).
-* **Customer data location at rest** - US
-* **Customer data processing** - US
-* **Services** - Microsoft Entra Connect, Microsoft Entra Connect Health, Device Registration, Directory Core Store, Dynamic Groups Service, Self-Service Group Management
-
-Some services incorrectly stored data out of the EU Data Boundary. The following details explain the customer data that these features currently transfer out of the EU as part of their service operations:
-
-* **Reason for customer data egress** - A small number of tenants created in the EU location prior to March 2019 were incorrectly stored out of the EU Data Boundary due to an issue that is now fixed. Microsoft is in the process of migrating tenants to the correct location.
-* **Types of customer data being egressed** - User and device account data, and service configuration (application, policy, and group).
-* **Customer data location at rest** - US and Asia/Pacific.
  - The tenants were initially created with a country code that is NOT in Europe, and the tenant country code was later changed to one in Europe. The Microsoft Entra directory data location is decided at tenant creation time and isn't changed when the country code for the tenant is updated. Starting in March 2019, Microsoft blocked updating the country code on a tenant to avoid such confusion.
  - There are 13 country codes (Azerbaijan, Bahrain, Israel, Jordan, Kazakhstan, Kuwait, Lebanon, Oman, Pakistan, Qatar, Saudi Arabia, Turkey, UAE) that were mapped to the Asia region until 2013 and later mapped to Europe. Tenants created from these country codes before July 2013 are provisioned in Asia instead of Europe.
  - There are seven country codes (Armenia, Georgia, Iraq, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan) that were mapped to the Asia region until 2017 and later mapped to Europe. Tenants created from these country codes before February 2017 are provisioned in Asia instead of Europe.
+* **Types of customer data being egressed** - User and device account data, and service configuration (application, policy, and group).
+* **Customer data location at rest** - US and Asia/Pacific.
* **Customer data processing** - The same as the location at rest.
* **Services** - Directory Core Store
-## Services temporarily excluded from the EU Data Boundary
+**EU Data Boundary:**
+
+For more information on Microsoft Entra temporary partial customer data transfers from the EU Data Boundary, see [Services that temporarily transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-temporary-partial-transfers.md#security-services).
+
+## Services that will permanently transfer a subset of customer data out of the EU Data Residency and EU Data Boundary
-Some services have work in progress to be EU Data Boundary compliant. This work is delayed beyond January 1, 2023. The following details explain the customer data that these features currently transfer out of the EU Data Boundary as part of their service operations:
+Some components of a service will continue to transfer a limited amount of customer data out of the EU Data Residency and EU Data Boundary because this transfer is by design to facilitate the function of the services.
-* **Reason for customer data egress** - These features haven't completed changes to fully process user or admin transactions, such as sign-in or object and application configuration actions within the EU Data Boundary.
-* **Types of customer data being egressed** - User and device account data, usage data, and service configuration (application, policy, group, and terms of use).
-* **Customer data location at rest** - In the EU Data Boundary.
-* **Customer data processing** - Some processing may occur globally.
-* **Services** - Microsoft Entra Connect, Microsoft Entra Connect Health, Enterprise Application Management, Dynamic Groups Service, MyAccount, MyApps, MySign-Ins, Reporting and Audit Insights, Self-Service Credentials Management, Self-Service Group Management, Sign-In, Terms of Use
+**EU Data Residency:**
-Some services have email specific data that will become compliant in the coming months. The following details explain the customer data that these features currently transfer out of the EU Data Boundary as part of their service operations:
+[Microsoft Entra ID](/azure/active-directory/fundamentals/whatis): When an IP Address or phone number is determined to be used in fraudulent activities, they are published globally to block access from any workloads using them.
-* **Reason for customer data egress** - To provide email notifications, some data is processed outside of the EU location.
-* **Types of customer data being egressed** - User account data (email address).
-* **Customer data location at rest** - In EU Data Boundary.
-* **Customer data processing**- Some processing may occur globally.
-* **Services** - Azure Active Directory Sync Fabric, Azure Certificate Service, Enterprise App Management, Identity Governance, Azure Customer Lockbox
+**EU Data Boundary:**
+
+For more information on Microsoft Entra permanent partial customer data transfers from the EU Data Boundary, see [Services that will permanently transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-permanent-partial-transfers.md#security-services).
## Other considerations

### Optional service capabilities that transfer data out of the EU Data Residency and EU Data Boundary
-Administrators can choose to enable or disable certain Microsoft Entra features. If the following features are enabled and used by the customer, they will result in data transfers out of the EU Data Residency and EU Data Boundary as described:
+**EU Data Residency:**
+
+Some services include capabilities that are optional (in some cases requiring a customer subscription), and where customer administrators can choose to enable or disable these capabilities for their service tenancies. If made available and used by a customer's users, these capabilities will result in data transfers out of Europe as described in the following sections in this article.
+
- [Multitenant administration](/azure/active-directory/multi-tenant-organizations/overview): An organization may choose to create a multitenant organization within Microsoft Entra ID. For example, a customer can invite users to their tenant in a B2B context. A customer can create a multitenant SaaS application that allows other third-party tenants to provision the application in the third-party tenant. A customer can make two or more tenants affiliated with one another and act as a single tenant in certain scenarios, such as multitenant organization (MTO) formation, tenant to tenant sync, and shared email domains. Administrator configuration and use of multitenant collaboration may occur with tenants outside of the EU Data Residency and EU Data Boundary, resulting in some customer data, such as user and device account data, usage data, and service configuration (application, policy, and group) being stored and processed in the location of the collaborating tenant.
- [Application Proxy](/azure/active-directory/app-proxy/application-proxy): Application Proxy allows customers to access both cloud and on-premises applications through an external URL or an internal application portal. Customers may choose advanced routing configurations that would cause customer data to egress outside of the EU Data Residency and EU Data Boundary, including user account data, usage data, and application configuration data.
+
+**EU Data Boundary:**
-* **Microsoft Entra Multi Tenant Collaboration** - With multi tenant collaboration scenarios enabled, customers can configure their tenant to collaborate with users from a different tenant. For example, a customer can invite users to their tenant in a B2B context. A customer can create a multi-tenant SaaS application that allows other third party tenants to provision the application in the third party tenant. Or, the customer can make two or more tenants affiliated with one another and act as a single tenant in certain scenarios, such as multi-tenant organization (MTO) formation, tenant to tenant sync, and shared e-mail domain sharing. Customer configuration and use of multi tenant collaboration may occur with tenants outside of the EU Data Residency and EU Data Boundary resulting in some customer data, such as user and device account data, usage data, and service configuration (application, policy, and group) stored and processed in the location of the collaborating tenant.
-* **Application Proxy** - Allows customers to access their on-premises web applications externally. Customers may choose advanced routing configurations that allow customer data to egress outside of the EU Data Residency and EU Data Boundary, including user account data, usage data, and application configuration data.
-* **Microsoft 365 Multi Geo** - Microsoft 365 Multi-Geo provides customers with the ability to expand their Microsoft 365 presence to multiple geographic countries/regions within a single existing Microsoft 365 tenant. Microsoft Entra ID will egress customer data to perform backup authentication to the locations configured by the customer. Types of customer data include user and device account data, branding data, and service configuration data (application, policy, and group).
+For more information, see [Optional service capabilities that transfer customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-transfers-for-optional-capabilities.md#microsoft-entra-id).
### Other EU Data Boundary online services
-Services and applications that integrate with Microsoft Entra ID have access to customer data. Review how each service and application stores and processes customer data, and verify that they meet your company's data handling requirements.
+Services and applications that integrate with Microsoft Entra have access to customer data. Review how each service and application stores and processes customer data, and verify that they meet your company's data handling requirements.
## Next steps
active-directory How To Create Delete Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md
Reviewing the default user permissions may also help you determine the type of u
The required role of least privilege varies based on the type of user you're adding and if you need to assign Microsoft Entra roles at the same time. **Global Administrator** can create users and assign roles, but whenever possible you should use the least privileged role.
-| Role | Task |
+| Task | Role |
| -- | -- |
| Create a new user | User Administrator |
| Invite an external guest | Guest Inviter |
active-directory Licensing Preview Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/licensing-preview-terms.md
+
+ Title: Microsoft Entra preview program terms
+description: In this article we go over the terms in effect when participating in Microsoft Entra preview programs.
+++++ Last updated : 09/19/2023
+# Customer intent: I am trying to find information on the terms and conditions for Microsoft Entra preview programs.
+++++
+# Microsoft Entra preview program terms
++
active-directory Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/apps.md
Microsoft Entra ID Governance can be integrated with many other applications, us
| [Robin](../../active-directory/saas-apps/robin-provisioning-tutorial.md) | ● | ● |
| [Rollbar](../../active-directory/saas-apps/rollbar-provisioning-tutorial.md) | ● | ● |
| [Rouse Sales](../../active-directory/saas-apps/rouse-sales-provisioning-tutorial.md) | ● | |
-| [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md) | ● | |
+| [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md) | ● | ● |
| [SafeGuard Cyber](../../active-directory/saas-apps/safeguard-cyber-provisioning-tutorial.md) | ● | ● |
| [Salesforce Sandbox](../../active-directory/saas-apps/salesforce-sandbox-provisioning-tutorial.md) | ● | ● |
| [Samanage](../../active-directory/saas-apps/samanage-provisioning-tutorial.md) | ● | ● |
active-directory Reference Connect Sync Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-sync-functions-reference.md
Returns the position where the substring was found or 0 if not found.
**Example:** `InStr("The quick brown fox","quick")`
-Evalues to 5
+Evaluates to 5
`InStr("repEated","e",3,vbBinaryCompare)`

Evaluates to 7
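
For intuition, note that `InStr` positions are 1-based and a miss returns 0, unlike Python's 0-based `str.find`, which returns -1. A small illustrative Python equivalent:

```python
def instr(haystack: str, needle: str, start: int = 1) -> int:
    """1-based substring search: position of the match, or 0 if not found."""
    # str.find is 0-based and returns -1 on a miss, so shift the
    # start offset and the result by one.
    return haystack.find(needle, start - 1) + 1

assert instr("The quick brown fox", "quick") == 5
assert instr("repEated", "e", 3) == 7   # case-sensitive, like vbBinaryCompare
assert instr("repEated", "x") == 0
```
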
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the la
### SSL/TLS renegotiation blocked
-SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it's blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients). ### Hostname certificate rotation failed
-API Management service failed to refresh hostname certificate from Key Vault. Ensure that certificate exists in Key Vault and API Management service identity is granted secret read access. Otherwise, API Management service cannot retrieve certificate updates from Key Vault, which may lead to the service using stale certificate and runtime API traffic being blocked as a result.
+API Management service failed to refresh hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and the API Management service identity is granted secret read access. Otherwise, the API Management service can't retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
Learn more about [Microsoft App Container App - ContainerAppMinimalReplicaCountT
### Renew custom domain certificate
-We detected the custom domain certificate you uploaded is near expiration. Please renew your certificate and upload the new certificate for your container apps.
+We detected the custom domain certificate you uploaded is near expiration. Renew your certificate and upload the new certificate for your container apps.
Learn more about [Microsoft App Container App - ContainerAppCustomDomainCertificateNearExpiration (Renew custom domain certificate)](https://aka.ms/containerappcustomdomaincert).
A potential networking issue has been identified for your Container Apps Environ
Learn more about [Managed Environment - CreateNewContainerAppsEnvironment (A potential networking issue has been identified with your Container Apps Environment that requires it to be re-created to avoid DNS issues)](https://aka.ms/createcontainerapp).
-## Cache for Redis
+### Domain verification required to renew your App Service Certificate
+
+You have an App Service Certificate that's currently in a Pending Issuance status and requires domain verification. Failure to validate domain ownership results in an unsuccessful certificate issuance. Domain verification isn't automated for App Service Certificates and requires your action.
+
+Learn more about [App Service Certificate - ASCDomainVerificationRequired (Domain verification required to renew your App Service Certificate)](https://aka.ms/ASCDomainVerificationRequired).
+
+## Cache
### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.
Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availabili
### Switch Secret version to 'Latest' for the Azure Front Door customer certificate

-We recommend configuring the Azure Front Door customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, so that the secret can be automatically rotated.
+We recommend configuring the Azure Front Door (AFD) customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, so that the secret can be automatically rotated.

Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types).
-## Compute
-### Migrate Virtual Machines to Availability Zones
+### Validate domain ownership by adding DNS TXT record to DNS provider.
+
+Validate domain ownership by adding DNS TXT record to DNS provider.
+
+Learn more about [Front Door Profile - ValidateDomainOwnership (Validate domain ownership by adding DNS TXT record to DNS provider.)](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state).
+
+### Revalidate domain ownership for the Azure Front Door managed certificate renewal
+
+Azure Front Door can't automatically renew the managed certificate because the domain isn't CNAME mapped to AFD endpoint. Revalidate domain ownership for the managed certificate to be automatically renewed.
+
+Learn more about [Front Door Profile - RevalidateDomainOwnership (Revalidate domain ownership for the Azure Front Door managed certificate renewal)](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state).
+
+### Renew the expired Azure Front Door customer certificate to avoid service disruption
+
+Some of the customer certificates for Azure Front Door Standard and Premium profiles have expired. Renew the certificate in time to avoid service disruption.
+
+Learn more about [Front Door Profile - RenewExpiredBYOC (Renew the expired Azure Front Door customer certificate to avoid service disruption.)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#use-your-own-certificate).
+
+### Cloud Services (classic) is retiring. Migrate off before 31 August 2024
+
+Cloud Services (classic) is retiring. Migrate off before 31 August 2024 to avoid any loss of data or disruption to business continuity.
+
+Learn more about [Resource - Cloud Services Retirement (Cloud Services (classic) is retiring. Migrate off before 31 August 2024)](https://aka.ms/ExternalRetirementEmailMay2022).
-By migrating virtual machines to Availability Zones, you can ensure the isolation of your VMs from potential failures in other zones, and you can expect enhanced resiliency in your workload by avoiding downtime and business interruptions.
+## Cognitive Services
-Learn more about [Availability Zones](../reliability/availability-zones-overview.md).
+### Quota Exceeded for this resource
+
+We detected that the quota for your resource has been exceeded. You can wait for the quota to be replenished automatically, or upgrade the resource to a paid SKU to get unblocked and use it again now.
+
+Learn more about [Cognitive Service - CognitiveServiceQuotaExceeded (Quota Exceeded for this resource)](/azure/cognitive-services/plan-manage-costs#pay-as-you-go).
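+As a hedged sketch, upgrading the resource to a paid tier with Azure CLI could look like this; the names are placeholders and the right SKU depends on the service:
+
+```bash
+# Move the resource from the free tier to a paid SKU such as S0.
+az cognitiveservices account update \
+  --name contoso-language \
+  --resource-group contoso-rg \
+  --sku S0
+```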
+
+### Upgrade your application to use the latest API version from Azure OpenAI
+
+We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
+
+Learn more about [Cognitive Service - CogSvcApiVersionOpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
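+For illustration, a request that pins a current GA `api-version` on the chat completions route might look like this sketch; the resource name, deployment name, and key are placeholders:
+
+```bash
+# Call chat completions with an explicit, current api-version.
+curl "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2023-05-15" \
+  -H "Content-Type: application/json" \
+  -H "api-key: $AZURE_OPENAI_KEY" \
+  -d '{"messages":[{"role":"user","content":"Hello"}]}'
+```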
+
+### Upgrade your application to use the latest API version from Azure OpenAI
+
+We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality.
+
+Learn more about [Cognitive Service - API version: OpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference).
+
+## Compute
### Enable Backups on your Virtual Machines
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
### Upgrade the standard disks attached to your premium-capable VM to premium disks
-We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+We have identified that you're using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. First, upgrading requires a VM reboot, and this process takes 3-5 minutes to complete. Second, if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).

### Enable virtual machine replication to protect your applications from regional outage
-Virtual machines that do not have replication enabled to another region, are not resilient to regional outages. Replicating the machines drastically reduce any adverse business impact during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the below list so that in an event of an outage, you can quickly bring up your machines in remote Azure region.
+Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication for all the business-critical virtual machines in the following list so that in the event of an outage, you can quickly bring up your machines in the remote Azure region.
Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).

### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost
Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost
### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery
-Using IP Address based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. It is advised to use Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags, to allow connectivity to Azure Site Recovery services for the machines.
+Using IP address-based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. We advise using Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags to allow connectivity to Azure Site Recovery services for the machines.
Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags).
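As a sketch, an outbound NSG rule that uses a service tag instead of IP addresses could look like the following; the resource names are placeholders, and the `AzureSiteRecovery` tag is assumed to be available in your region:

```bash
# Allow outbound HTTPS to Site Recovery via a service tag rather than IP filters.
az network nsg rule create \
  --resource-group contoso-rg \
  --nsg-name contoso-nsg \
  --name AllowSiteRecoveryOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureSiteRecovery
```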
+### Update your firewall configurations to allow new RHUI 4 IPs
+
+Your Virtual Machine Scale Sets will start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
+
+Learn more about [Virtual machine scale set - Rhui3ToRhui4MigrationVMSS (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
+
+### Update your firewall configurations to allow new RHUI 4 IPs
+
+Your Virtual Machines will start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
+
+Learn more about [Virtual machine - Rhui3ToRhui4MigrationV2 (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
+
+### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer Offer of the image to prevent disruption to your workloads.
+
+Learn more about [Virtual machine - VMRunningDeprecatedOfferLevelImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer SKU of the image to prevent disruption to your workloads.
+
+Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual Machines in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer version of the image to prevent disruption to your workloads.
+
+Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual Machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads will no longer be able to scale out. Upgrade to a newer Offer of the image to prevent disruption to your workload.
+
+Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedOfferImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads will no longer be able to scale out. Upgrade to a newer version of the image to prevent disruption to your workload.
+
+Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation
+
+Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads will no longer be able to scale out. Upgrade to a newer Plan of the image to prevent disruption to your workload.
+
+Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedPlanImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).
+
+### Use Availability zones for better resiliency and availability
+
+Availability Zones (AZ) in Azure help protect your applications and data from datacenter failures. Each AZ is made up of one or more datacenters equipped with independent power, cooling, and networking. By designing solutions to use zonal VMs, you can isolate your VMs from failure in any other zone.
+
+Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview).
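+As a minimal sketch, creating a zonal VM with Azure CLI could look like this; all names are placeholders:
+
+```bash
+# Pin the new VM to availability zone 1 in a region that supports zones.
+az vm create \
+  --resource-group contoso-rg \
+  --name contoso-vm \
+  --image Ubuntu2204 \
+  --zone 1 \
+  --admin-username azureuser \
+  --generate-ssh-keys
+```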
+ ### Use Managed Disks to improve data reliability
-Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
+Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).

### Check Point Virtual Machine may lose Network Connectivity.
-We have identified that your Virtual Machine might be running a version of Check Point image that has been known to lose network connectivity in the event of a platform servicing operation. It is recommended that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
+We have identified that your Virtual Machine might be running a version of the Check Point image that is known to lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
+ ### Access to mandatory URLs missing for your Azure Virtual Desktop environment

In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from the allowed list, you may also search the Application event log for event 3702.
Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade
### Configure your Azure Cosmos DB containers with a partition key
-Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
+Your Azure Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey).
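As a sketch, creating a replacement container with a partition key definition could look like this; the account, database, container, and key path are placeholders:

```bash
# Create a new container whose items are partitioned by /tenantId.
az cosmosdb sql container create \
  --account-name contoso-cosmos \
  --resource-group contoso-rg \
  --database-name mydb \
  --name mycontainer-partitioned \
  --partition-key-path "/tenantId"
```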
Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use
### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
-There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparent to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: This is a critical hotfix for the Async Java SDK v2, however it is still highly recommended you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
+There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 that causes errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. The service reaches this value transparently after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: This is a critical hotfix for the Async Java SDK v2; however, we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgra
### Upgrade your Azure Fluid Relay client library
-You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, please refer to the article.
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article.
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Old
### Enable critical updates to be applied to your HDInsight clusters
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).

### Drop and recreate your HDInsight clusters to apply critical updates
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters.
Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).

### Drop and recreate your HDInsight clusters to apply critical updates
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).

### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources (Load balancer, Network interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow the HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before January 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and subnet as your cluster. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).

### Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021
-You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight cluster. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see 'Learn More' link or contact us at askhdinsight@microsoft.com
+You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more information, see the 'Learn More' link or contact us at askhdinsight@microsoft.com.
Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
### Increase Media Services quotas or limits to ensure continuity of service.
-Please be advised that your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Do not create additional Azure Media accounts in an attempt to obtain higher limits.
+Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create additional Azure Media accounts in an attempt to obtain higher limits.
Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more
### Move to production gateway SKUs from Basic gateways
-The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability.
+The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).

### Add at least one more endpoint to the profile, preferably in another Azure region
-Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. It is also recommended that endpoints be in different regions.
+Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. We also recommend that endpoints be in different regions.
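+For example, a sketch of adding an endpoint in a second region with Azure CLI; the profile, group, and target resource ID are placeholders:
+
+```bash
+# Add an Azure endpoint in another region to an existing Traffic Manager profile.
+az network traffic-manager endpoint create \
+  --resource-group contoso-rg \
+  --profile-name contoso-tm \
+  --name westeurope-endpoint \
+  --type azureEndpoints \
+  --target-resource-id "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Network/publicIPAddresses/contoso-pip-weu"
+```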
Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).

### Add an endpoint configured to "All (World)"
-For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantee service remains available.
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there's no predefined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantees that the service remains available.
Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to "All (World)")](https://aka.ms/Rf7vc5).
Learn more about [Traffic Manager profile - ProximityProfile (Add or move one en
### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
-We have detected that your ExpressRoute gateway only has 1 ExpressRoute circuit associated to it. Connect 1 or more additional circuits to your gateway to ensure peering location redundancy and resiliency
+We have detected that your ExpressRoute gateway only has one ExpressRoute circuit associated with it. Connect one or more additional circuits to your gateway to ensure peering location redundancy and resiliency.
Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](../expressroute/designing-for-high-availability-with-expressroute.md).

### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit
-We have detected that your ExpressRoute circuit is not currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute monitor provides end-to-end monitoring capabilities including: Loss, latency, and performance from on-premises to Azure and Azure to on-premises
+We have detected that your ExpressRoute circuit isn't currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute Monitor provides end-to-end monitoring capabilities, including loss, latency, and performance from on-premises to Azure and from Azure to on-premises.
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](../expressroute/how-to-npm.md).
Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname o
### Use ExpressRoute Global Reach to improve your design for disaster recovery
-You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments in the event of one circuit losing connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.
+You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments if one circuit loses connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.
Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md).
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Acti
### Enable soft delete for your Recovery Services vaults
-Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it is permanently deleted.
+Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it's permanently deleted.
Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md).
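A minimal sketch of enabling it with Azure CLI, assuming hypothetical vault and resource group names:

```bash
# Turn on soft delete for the vault's backup data.
az backup vault backup-properties set \
  --name contoso-vault \
  --resource-group contoso-rg \
  --soft-delete-feature-state Enable
```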
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
### You are close to exceeding storage quota of 2GB. Create a Standard search service.
-You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
+You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
-Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
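+For example, creating a Standard-tier service with Azure CLI might look like this sketch; the service and group names are placeholders:
+
+```bash
+# Create a Standard search service, which has a much larger storage quota.
+az search service create \
+  --name contoso-search \
+  --resource-group contoso-rg \
+  --sku standard
+```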
### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
-You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
+You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
-Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
-You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
+You're close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding the storage quota, you can still query, but indexing operations no longer work.
-Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
## Storage
Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to pro
### Use Managed Disks for storage accounts reaching capacity limit
-We have identified that you are using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that do not have account capacity limit. This migration can be done through the portal in less than 5 minutes.
+We have identified that you're using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach the Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks, which don't have an account capacity limit. This migration can be done through the portal in less than 5 minutes.
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider u
### Application code should be fixed as worker process crashed due to Unhandled Exception
-We identified the below thread resulted in an unhandled exception for your App and application code should be fixed to prevent impact to application availability. A crash happens when an exception in your code goes un-handled and terminates the process.
+We identified that the below thread resulted in an unhandled exception for your App, and your application code should be fixed to prevent impact to application availability. A crash happens when an unhandled exception in your code terminates the process.
Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
ai-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md
Azure AI containers provide the following set of Docker containers, each of whic
| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |
| [Language service][ta-containers-cner] | **Custom Named Entity Recognition** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about))| Extract named entities from text, using a custom model you create using your data. | Preview |
+| [Language service][ta-containers-summarization] | **Summarization** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/summarization/about))| Summarize text from various sources. | Gated - [request access](https://aka.ms/csgate-summarization). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Translator][tr-containers] | **Translator** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/about))| Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |

### Speech containers
Install and explore the functionality provided by containers in Azure AI service
* [Speech Service API containers][sp-containers]
* [Language service containers][ta-containers]
* [Translator containers][tr-containers]
+* [Summarization containers][su-containers]
<!--* [Personalizer containers](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409) -->
Install and explore the functionality provided by containers in Azure AI service
[ta-containers-sentiment]: language-service/sentiment-opinion-mining/how-to/use-containers.md
[ta-containers-health]: language-service/text-analytics-for-health/how-to/use-containers.md
[ta-containers-cner]: language-service/custom-named-entity-recognition/how-to/use-containers.md
+[ta-containers-summarization]: language-service/summarization/how-to/use-containers.md
[tr-containers]: translator/containers/translator-how-to-install-container.md
[request-access]: https://aka.ms/csgate
ai-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/configure-containers.md
Language service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. This article applies to the following containers:
-* sentiment analysis
-* language detection
-* key phrase extraction
-* Text Analytics for health
+* Sentiment Analysis
+* Language Detection
+* Key Phrase Extraction
+* Text Analytics for Health
+* Summarization
## Configuration settings
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/overview.md
Use Language service containers to deploy API features on-premises. These Docker
* [Key phrase extraction](key-phrase-extraction/how-to/use-containers.md)
* [Custom Named Entity Recognition](custom-named-entity-recognition/how-to/use-containers.md)
* [Text Analytics for health](text-analytics-for-health/how-to/use-containers.md)
+* [Summarization](summarization/how-to/use-containers.md)
## Responsible AI
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/use-containers.md
+
+ Title: Use summarization Docker containers on-premises
+
+description: Use Docker containers for the summarization API to summarize text, on-premises.
++++++ Last updated : 08/15/2023+
+keywords: on-premises, Docker, container
++
+# Use summarization Docker containers on-premises
+
+Containers enable you to host the Summarization API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Summarization remotely, then containers might be a good option.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). For disconnected containers, the DC0 tier is required.
+* For disconnected containers,
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for the summarization container skills. Listed CPU/memory combinations are for a 4000 token input (conversation consumption is for all the aspects in the same request).
+
+| Container Type | Recommended number of CPU cores | Recommended memory | Notes |
+|-|-|--|-|
+| Summarization CPU container| 16 | 48 GB | |
+| Summarization GPU container| 2 | 24 GB | Requires an NVIDIA GPU that supports CUDA 11.8, with 16 GB of VRAM.|
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
++
+## Get the container image with `docker pull`
+
+The Summarization container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `summarization`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the Microsoft Container Registry.
+
+For CPU containers:
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu
+```
+
+For GPU containers:
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:gpu
+```
++
+## Download the summarization models
+
+A prerequisite for running the summarization container is to download the models first. Do this by running one of the following commands:
+
+```bash
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=ExtractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=AbstractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+docker run -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization downloadModels=ConversationSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+```
+We don't recommend downloading models for all skills inside the same `HOST_MODELS_PATH`, because the container loads all models inside the `HOST_MODELS_PATH`. Doing so would use a large amount of memory. Download only the model for the skill you need in a particular `HOST_MODELS_PATH`, as shown in the sketch below.
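+For example, a sketch that keeps each skill's model in its own host folder (the host paths are hypothetical):
+
+```bash
+# Download only the extractive model into a folder dedicated to that skill.
+docker run -v /opt/summarization/extractive:/models \
+  mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization \
+  downloadModels=ExtractiveSummarization billing={ENDPOINT_URI} apikey={API_KEY}
+```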
+
+To ensure compatibility between models and the container, re-download the models you use whenever you create a container using a new image version. When using a disconnected container, download the license again after downloading the models.
+
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the following command to run the containers. The container continues to run until you stop it. Note the `rai_terms=accept` argument in the following commands.
+
+```bash
+docker run -p 5000:5000 -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
+```
+
+If you're running a GPU container, use this command instead.
+```bash
+docker run -p 5000:5000 --gpus all -v {HOST_MODELS_PATH}:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
+```
+If there is more than one GPU on the machine, replace `--gpus all` with `--gpus device={DEVICE_ID}`.
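+For example, to pin the container to the first GPU:
+
+```bash
+# Run the GPU container on device 0 only.
+docker run -p 5000:5000 --gpus device=0 -v {HOST_MODELS_PATH}:/models \
+  mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization \
+  eula=accept rai_terms=accept billing={ENDPOINT_URI} apikey={API_KEY}
+```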
++
+> [!IMPORTANT]
+> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `Eula`, `Billing`, `rai_terms` and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+To run the *Summarization* container, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the summarization API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
++
+```bash
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization \
+Eula=accept \
+rai_terms=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a *Summarization* container from the container image
+* Allocates one CPU core and 4 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Automatically removes the container after it exits. The container image is still available on the host computer.
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
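+As a quick check before sending prediction requests, you can probe the standard routes that Azure AI containers expose; this sketch assumes the common `/ready`, `/status`, and `/swagger` routes:
+
+```bash
+curl http://localhost:5000/ready    # the container is ready to accept queries
+curl http://localhost:5000/status   # the container can validate its billing key
+# Browse http://localhost:5000/swagger for the list of prediction routes.
+```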
+
+<!-- ## Validate container is running -->
++
+## Run the container disconnected from the internet
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++
+## Billing
+
+The summarization containers send billing information to Azure, using a _Language_ resource on your Azure account.
++
+For more information about these options, see [Configure containers](../../concepts/configure-containers.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running summarization containers. In summary:
+
+* Summarization provides Linux containers for Docker
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> This container is not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
ai-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/region-support.md
Some summarization features are only available in limited regions. More regions
## Regional availability table
-|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
-||||||-|
-|North Europe|&#9989;|&#9989;|&#9989;|&#10060;|
-|East US|&#9989;|&#9989;|&#9989;|&#9989;|
-|UK South|&#9989;|&#9989;|&#9989;|&#10060;|
-|Southeast Asia|&#9989;|&#9989;|&#9989;|&#10060;|
+|Region |Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
+|--|--|--|--|--|
+|Azure Gov Virginia|&#9989; |&#9989; |&#9989; |&#9989; |
+|North Europe |&#9989; |&#9989; |&#9989; |&#10060; |
+|East US |&#9989; |&#9989; |&#9989; |&#9989; |
+|UK South |&#9989; |&#9989; |&#9989; |&#10060; |
+|Southeast Asia |&#9989; |&#9989; |&#9989; |&#10060; |
## Next steps
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
## September 2023

### GPT-4
-GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to request access to use GPT-4 and GPT-4-32k. Availability may be limited by region. Check the [models page](concepts/models.md), for the latest information on model availability in each region.
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to apply for the waitlist to use GPT-4 and GPT-4-32k (the Limited Access registration requirements continue to apply for all Azure OpenAI models). Availability may vary by region. Check the [models page](concepts/models.md), for the latest information on model availability in each region.
### GPT-3.5 Turbo Instruct
ai-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md
Previously updated : 09/11/2023 Last updated : 10/2/2023 keywords: on-premises, Docker, container
The following table lists the Speech containers available in the Microsoft Conta
| Container | Features | Supported versions and locales | |--|--|--|
-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.1.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.1.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
+| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.3.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
+| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.3.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
-| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.15.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
+| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.17.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.

<sup>2</sup> Not available as a disconnected container.
ai-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-sdk.md
Previously updated : 09/16/2022 Last updated : 10/02/2023
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
The following sections provide you with a quick guide to the quotas and limits t
For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
+> [!IMPORTANT]
+> If you switch a Speech resource from the Free (F0) to the Standard (S0) pricing tier, the corresponding quota changes can take up to several hours to take effect.
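A minimal sketch of that tier change with the Azure CLI, assuming placeholder resource and group names:

```azurecli-interactive
# Sketch: move a Speech resource from the Free (F0) to the Standard (S0) tier.
# "my-speech-resource" and "my-resource-group" are placeholder names.
az cognitiveservices account update \
    --name my-speech-resource \
    --resource-group my-resource-group \
    --sku S0
```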
+ ### Speech to text quotas and limits per resource This section describes speech to text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Last updated 02/03/2023
-# Automatically upgrade Azure Kubernetes Service cluster node operating system images
+# Automatically upgrade Azure Kubernetes Service cluster node operating system images
AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, can't be used for cluster-level Kubernetes version upgrades. To automatically upgrade Kubernetes versions, continue to use the cluster [auto-upgrade][Autoupgrade] channel.
It's highly recommended to use both cluster-level [auto-upgrades][Autoupgrade] a
The selected channel determines the timing of upgrades. When making changes to node OS auto-upgrade channels, allow up to 24 hours for the changes to take effect. > [!NOTE]
-> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it will only work for a cluster in a [supported version][supported].
-
+> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it only works for a cluster in a [supported version][supported].
The following upgrade channels are available. You're allowed to choose one of these options:

|Channel|Description|OS-specific behavior|
|---|---|---|
-| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
-| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure.|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. Azure Linux CPU node pools don't automatically apply security patches, so this option behaves equivalently to `None`.|
-| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
+| `None`| Your nodes don't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
+| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially. The OS's infrastructure patches them at some point.|Ubuntu and Azure Linux (CPU node pools) apply security patches through unattended upgrade/dnf-automatic roughly once per day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`.|
+| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` works on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. Node image upgrades will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
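A hedged sketch of that command (cluster and resource group names are placeholders; at the time of writing, setting the channel may require the aks-preview CLI extension):

```azurecli-interactive
# Sketch: if you choose the preview SecurityPatch channel, register its feature flag first.
az feature register --namespace Microsoft.ContainerService --name NodeOsUpgradeChannelPreview

# Create a cluster with the node OS auto-upgrade channel set.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-os-upgrade-channel SecurityPatch
```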
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
By making use of eBPF programs loaded into the Linux kernel and a more efficient
Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs: -- Assign IP addresses from a virtual network (similar to existing Azure CNI with Dynamic Pod IP Assignment)- - Assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)
+- Assign IP addresses from a virtual network (similar to existing Azure CNI with Dynamic Pod IP Assignment)
+ If you aren't sure which option to select, read ["Choosing a network model to use"](./azure-cni-overlay.md#choosing-a-network-model-to-use). ## Network Policy Enforcement
Azure CNI powered by Cilium currently has the following limitations:
## Create a new AKS Cluster with Azure CNI Powered by Cilium
-### Option 1: Assign IP addresses from a virtual network
+### Option 1: Assign IP addresses from an overlay network
+
+Use the following commands to create a cluster with an overlay network and Cilium. Replace the values for `<clusterName>`, `<resourceGroupName>`, and `<location>`:
+
+```azurecli-interactive
+az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --network-dataplane cilium
+```
+
+> [!NOTE]
+> The `--network-dataplane cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.
+
+### Option 2: Assign IP addresses from a virtual network
Run the following commands to create a resource group and virtual network with a subnet for nodes and a subnet for pods.
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
--network-dataplane cilium ```
-> [!NOTE]
-> The `--network-dataplane cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.
-
-### Option 2: Assign IP addresses from an overlay network
-
-Use the following commands to create a cluster with an overlay network and Cilium. Replace the values for `<clusterName>`, `<resourceGroupName>`, and `<location>`:
-
-```azurecli-interactive
-az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
- --network-plugin azure \
- --network-plugin-mode overlay \
- --pod-cidr 192.168.0.0/16 \
- --network-dataplane cilium
-```
- ## Upgrade an existing cluster to Azure CNI Powered by Cilium > [!NOTE]
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Microsoft provides guidance for other actions you can take to secure your worklo
AKS uses a secure tunnel communication to allow the api-server and individual node kubelets to communicate even on separate virtual networks. The tunnel is secured through mTLS encryption. The current main tunnel that is used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Verify all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
+## Can my pods use the API server FQDN instead of the cluster IP?
+
+Yes, you can add the annotation `kubernetes.azure.com/set-kube-service-host-fqdn` to pods to set the `KUBERNETES_SERVICE_HOST` variable to the domain name of the API server instead of the in-cluster service IP. This is useful in cases where your cluster egress is done via a layer 7 firewall, such as when using Azure Firewall with Application Rules.
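A minimal sketch of a pod that opts into this behavior; the pod name and container image are placeholders, while the annotation name comes from the paragraph above:

```bash
# Sketch: the annotation must be present at pod creation time.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo                 # placeholder name
  annotations:
    kubernetes.azure.com/set-kube-service-host-fqdn: "true"
spec:
  containers:
  - name: demo
    image: mcr.microsoft.com/azurelinux/busybox:1.36   # placeholder image
    command: ["sleep", "3600"]
EOF
```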
+ ## Why are two resource groups created with AKS? AKS builds upon many Azure infrastructure resources, including Virtual Machine Scale Sets, virtual networks, and managed disks. These integrations enable you to apply many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS and Azure Reservations can be used to receive discounts on those resources automatically.
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 08/15/2023 Last updated : 09/25/2023
Connectivity to Arc-enabled server endpoints is required for:
[!INCLUDE [network-requirements](servers/includes/network-requirements.md)]
+### Subset of endpoints for ESU only
++ For more information, see [Connected Machine agent network requirements](servers/network-requirements.md). ## Azure Arc resource bridge (preview)
azure-arc Api Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/api-extended-security-updates.md
Title: Programmatically deploy and manage Azure Arc Extended Security Updates licenses description: Learn how to programmatically deploy and manage Azure Arc Extended Security Updates licenses for Windows Server 2012. Previously updated : 09/20/2023 Last updated : 10/02/2023
This article provides instructions to programmatically provision and manage Windows Server 2012 and Windows Server 2012 R2 Extended Security Updates lifecycle operations through the Azure Arc WS2012 ESU ARM APIs.
+For each of the API commands explained in this article, be sure to enter accurate parameter information for location, state, edition, type, and processors, depending on your particular scenario.
+ > [!NOTE]
-> For each of the API commands, be sure to enter accurate parameter information for location, state, edition, type, and processors depending on your particular scenario
+> You'll need to create a service principal to use the Azure API to manage ESUs. See [Connect hybrid machines to Azure at scale](onboard-service-principal.md) and [Azure REST API reference](/rest/api/azure/) for more information.
> + ## Provision a license To provision a license, execute the following commands:
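As a hedged sketch, provisioning works as a PUT against the `Microsoft.HybridCompute/licenses` resource type. The subscription ID, resource group, license name, api-version, and property values below are placeholders to adapt to your scenario:

```bash
# Sketch: provision a WS2012 ESU license through the ARM API with az rest.
# All identifiers and property values are placeholders/assumptions.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/licenses/<licenseName>?api-version=2023-06-20-preview" \
  --body '{
    "location": "eastus",
    "properties": {
      "licenseDetails": {
        "state": "Activated",
        "target": "Windows Server 2012",
        "edition": "Datacenter",
        "type": "pCore",
        "processors": 16
      }
    }
  }'
```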
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
In this scenario, you can deactivate or decommission the ESU Licenses associated
In this scenario, you should provision a Windows Server 2012 Datacenter license associated with 128 physical cores and link this license to the Arc-enabled Windows Server 2012 R2 VMs running on it. The deletion of the underlying VM also deletes the corresponding Arc-enabled server resource, enabling you to link another Arc-enabled server.
-### Scenario 8: A insurance customer is running a 16 node VMware cluster with 1024 cores, licensed with Windows Server Datacenter for maximum virtualization use rights. There are 120 Windows VMs ranging from 4 to 12 cores, with 44 Windows Server 2012 R2 machines with a total of 506 cores.
+### Scenario 8: An insurance customer is running a 16 node VMware cluster with 1024 cores, licensed with Windows Server Datacenter for maximum virtualization use rights. There are 120 Windows VMs ranging from 4 to 12 cores, with 44 Windows Server 2012 R2 machines with a total of 506 cores.
-In this scenario, you should purchase an Arc ESU Windows Server 2012 Datacenter edition license associated with 506 physical cores and link this license to their 44 machines. Each of the 44 machines should be onboarded to Azure Arc, and can be onboarded at scale with Arc-enabled VMware vSphere (AVS). If you migrate to AVS, these servers are eligible for free WS2012 ESUs.
+In this scenario, you should purchase an Arc ESU Windows Server 2012 Datacenter edition license associated with 506 physical cores and link this license to your 44 machines. Each of the 44 machines should be onboarded to Azure Arc, and can be onboarded at scale with Arc-enabled VMware vSphere (AVS). If you migrate to AVS, these servers are eligible for free WS2012 ESUs.
## Next steps
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 11/15/2022 Last updated : 09/25/2023
This topic describes the networking requirements for using the Connected Machine
[!INCLUDE [network-requirements](./includes/network-requirements.md)]
+## Subset of endpoints for ESU only
++ ## Next steps * Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md).
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Other Azure services through Azure Arc-enabled servers are available as well, wi
To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. -- **Deployment options:** There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).--- **Networking:** Connectivity options include public endpoint, proxy server, and private link or Azure Express Route. Review the [networking prerequisites](network-requirements.md) to prepare their non-Azure environment for deployment to Azure Arc.- We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts from October 2023, after Windows Server 2012 end of support. > [!NOTE] > In order to purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE). Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance is not required to purchase ESUs.
->
+
+### Deployment options
+
+There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).
+
+### Networking
+
+Connectivity options include public endpoint, proxy server, and private link or Azure ExpressRoute. Review the [networking prerequisites](network-requirements.md) to prepare non-Azure environments for deployment to Azure Arc.
++
+> [!TIP]
+> To take advantage of the full range of offerings for Arc-enabled servers, such as extensions and remote connectivity, ensure that you allow the additional URLs that apply to your scenario. For more information, see [Connected machine agent networking requirements](network-requirements.md).
+ ## Next steps * Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-administration.md
Previously updated : 07/22/2021 Last updated : 09/29/2023 # How to administer Azure Cache for Redis
-This article describes how to do administration tasks such as [rebooting](#reboot) and [scheduling updates](#schedule-updates) for your Azure Cache for Redis instances.
+This article describes how to do administration tasks, such as [rebooting](#reboot) and [managing the update channel and scheduled updates](#update-channel-and-schedule-updates), for your Azure Cache for Redis instances.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
The effect on your client applications varies depending on which nodes you reboo
* **Primary** - When the primary node is rebooted, Azure Cache for Redis fails over to the replica node and promotes it to primary. During this failover, there may be a short interval in which connections may fail to the cache. * **Replica** - When the replica node is rebooted, there's typically no effect on the cache clients.
-* **Both primary and replica** - When both cache nodes are rebooted, Azure Cache for Redis will attempt to gracefully reboot both nodes, waiting for one to finish before rebooting the other. Typically, data loss does not occur. However, data loss can still occur do to unexpected maintenance events or failures. Rebooting your cache many times in a row increases the odds of data loss.
+* **Both primary and replica** - When both cache nodes are rebooted, Azure Cache for Redis attempts to gracefully reboot both nodes, waiting for one to finish before rebooting the other. Typically, data loss doesn't occur. However, data loss can still occur due to unexpected maintenance events or failures. Rebooting your cache many times in a row increases the odds of data loss.
* **Nodes of a premium cache with clustering enabled** - When you reboot one or more nodes of a premium cache with clustering enabled, the behavior for the selected nodes is the same as when you reboot the corresponding node or nodes of a non-clustered cache. ## Reboot FAQ
Yes, for PowerShell instructions see [To reboot an Azure Cache for Redis](cache-
No. Reboot isn't available for the Enterprise tier yet. Reboot is available for the Basic, Standard, and Premium tiers. The settings that you see on the Resource menu under **Administration** depend on the tier of your cache. You don't see **Reboot** when using a cache from the Enterprise tier.
-## Schedule updates
+## Flush data (preview)
-On the left, **Schedule updates** allows you to choose a maintenance window for your cache instance. A maintenance window allows you to control the day(s) and time(s) of a week during which the VM(s) hosting your cache can be updated. Azure Cache for Redis will make a best effort to start and finish updating Redis server software within the specified time window you define.
+When using the Basic, Standard, or Premium tiers of Azure Cache for Redis, you see **Flush data** on the resource menu. The **Flush data** operation allows you to delete or _flush_ all data in your cache. This _flush_ operation can be used before scaling operations to potentially reduce the time required to complete the scaling operation on your cache. You can also configure the _flush_ operation to run periodically on your dev/test caches to keep memory usage in check.
-> [!NOTE]
-> The maintenance window applies to Redis server updates and updates to the Operating System of the VMs hosting the cache. The maintenance window does not apply to Host OS updates to the Hosts hosting the cache VMs or other Azure Networking components. In rare cases, where caches are hosted on older models (you can tell if your cache is on an older model if the DNS name of the cache resolves to a suffix of "cloudapp.net", "chinacloudapp.cn", "usgovcloudapi.net" or "cloudapi.de"), the maintenance window won't apply to Guest OS updates either.
+The _flush_ operation, when executed on a clustered cache, clears data from all shards at the same time.
+
+> [!IMPORTANT]
+> Previously, the _flush_ operation was only available for geo-replicated Enterprise tier caches. Now, it is available in Basic, Standard and Premium tiers.
>
-> Currently, no option is available to configure a reboot or scheduled updates for an Enterprise tier cache.
+
+## Update channel and Schedule updates
+
+On the left, **Schedule updates** allows you to choose an update channel and a maintenance window for your cache instance.
+
+Cache instances using the **Stable** update channel receive updates a few weeks later than cache instances using the **Preview** update channel. We recommend the **Preview** update channel for your non-production and less critical workloads, and the **Stable** update channel for your most critical, production workloads. All caches use the **Stable** update channel by default.
+
+> [!IMPORTANT]
+> Changing the update channel on your cache instance results in your cache undergoing a patching event to apply the right updates. Consider changing the update channel during your maintenance window.
+>
+
+A maintenance window allows you to control the days and times of a week during which the VMs hosting your cache can be updated. Azure Cache for Redis makes a best effort to start and finish updating Redis server software within the specified time window you define.
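A sketch of defining such a window with the Azure CLI, assuming placeholder cache and group names:

```azurecli-interactive
# Sketch: allow maintenance on Saturdays starting at 02:00 UTC for up to 5 hours.
az redis patch-schedule create \
    --name myRedisCache \
    --resource-group myResourceGroup \
    --schedule-entries '[{"dayOfWeek":"Saturday","startHourUtc":"2","maintenanceWindow":"PT5H"}]'
```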
+
+> [!IMPORTANT]
+> The update channel and maintenance window apply to Redis server updates and updates to the operating system of the VMs hosting the cache. The update channel and maintenance window don't apply to Host OS updates on the hosts running the cache VMs, or to other Azure networking components. In rare cases, where caches are hosted on older models, the maintenance window doesn't apply to Guest OS updates either. You can tell if your cache is on an older model if the DNS name of the cache resolves to a suffix of `cloudapp.net`, `chinacloudapp.cn`, `usgovcloudapi.net`, or `cloudapi.de`.
>
+Currently, no option is available to configure an update channel or scheduled updates for an Enterprise tier cache.
:::image type="content" source="media/cache-administration/redis-schedule-updates-2.png" alt-text="Screenshot showing schedule updates":::
azure-cache-for-redis Cache Best Practices Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-connection.md
Previously updated : 02/27/2023 Last updated : 09/29/2023
Test your system's resiliency to connection breaks using a [reboot](cache-admini
## TCP settings for Linux-hosted client applications
-The default TCP settings in some Linux versions can cause Redis server connections to fail for 13 minutes or more. The default settings can prevent the client application from detecting closed connections and restoring them automatically if the connection wasn't closed gracefully.
+The default TCP settings in some Linux versions can cause Redis server connections to fail for 13 minutes or more. The default settings can prevent the client application from detecting closed connections and restoring them automatically if the connection wasn't closed gracefully.
The failure to reestablish a connection can happen in situations where the network connection is disrupted or the Redis server goes offline for unplanned maintenance.
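As a hedged example, one commonly adjusted setting is `tcp_retries2`, which controls how long Linux keeps retransmitting on a dead connection before giving up; the value below is an assumption to validate for your workload:

```bash
# Sketch: detect dead TCP connections sooner by lowering tcp_retries2.
# Persist the setting in /etc/sysctl.conf once validated.
sudo sysctl -w net.ipv4.tcp_retries2=5
```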
Use notifications to learn of upcoming maintenance. For more information, see [C
## Schedule maintenance window
-Adjust your cache settings to accommodate maintenance. For more information about creating a maintenance window to reduce any negative effects to your cache, see [Schedule updates](cache-administration.md#schedule-updates).
+Adjust your cache settings to accommodate maintenance. For more information about creating a maintenance window to reduce any negative effects to your cache, see [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates).
## More design patterns for resilience
Apply design patterns for resiliency. For more information, see [How do I make m
## Idle timeout
-Azure Cache for Redis has a 10-minute timeout for idle connections. The 10-minute timeout allows the server to automatically clean up leaky connections or connections orphaned by a client application. Most Redis client libraries have a built-in capability to send `heartbeat` or `keepalive` commands periodically to prevent connections from being closed even if there are no requests from the client application.
+Azure Cache for Redis has a 10-minute timeout for idle connections. The 10-minute timeout allows the server to automatically clean up leaky connections or connections orphaned by a client application. Most Redis client libraries have a built-in capability to send `heartbeat` or `keepalive` commands periodically to prevent connections from being closed even if there are no requests from the client application.
If there's any risk of your connections being idle for 10 minutes, configure the `keepalive` interval to a value less than 10 minutes. If your application is using a client library that doesn't have native support for `keepalive` functionality, you can implement it in your application by periodically sending a `PING` command.
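For example, a minimal keepalive sketch with `redis-cli`; the host name and access key are placeholders, `-r -1 -i 60` repeats `PING` indefinitely at 60-second intervals over a single connection, and `--tls` requires redis-cli 6.0 or later:

```bash
# Sketch: keep one TLS connection warm by sending PING every 60 seconds.
redis-cli -h mycache.redis.cache.windows.net -p 6380 --tls -a <access-key> -r -1 -i 60 PING
```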
-## Next steps
+## Related content
- [Best practices for development](cache-best-practices-development.md) - [Azure Cache for Redis development FAQ](cache-development-faq.yml)
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md
For more information on scaling and memory, depending on your tier see either:
## Minimizing your data helps scaling complete quicker
-If preserving the data in the cache isn't a requirement, consider flushing the data prior to scaling. Flushing the cache helps the scaling operation complete more quickly so the new capacity is available sooner.
+If preserving the data in the cache isn't a requirement, consider flushing the data prior to scaling. Flushing the cache helps the scaling operation complete more quickly so the new capacity is available sooner. For details, see [how to initiate the flush operation](cache-administration.md#flush-data-preview).
## Scaling Enterprise tier caches
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
Previously updated : 11/21/2022 Last updated : 09/29/2023
Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemo
**Maxmemory policy** configures the eviction policy for the cache and allows you to choose from the following eviction policies: -- `volatile-lru`: The default eviction policy, removes the least recently used key out of all the keys with an expiration set.
+- `volatile-lru`: The default eviction policy. It removes the least recently used key out of all the keys with an expiration set.
- `allkeys-lru`: Removes the least recently used key. - `volatile-random`: Removes a random key that has an expiration set. - `allkeys-random`: Removes a random key.
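To change the eviction policy outside the portal, a minimal Azure CLI sketch with placeholder cache and group names:

```azurecli-interactive
# Sketch: switch the eviction policy to allkeys-lru.
az redis update \
    --name myRedisCache \
    --resource-group myResourceGroup \
    --set "redisConfiguration.maxmemory-policy"="allkeys-lru"
```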
The **Schedule updates** section on the left allows you to choose a maintenance
To specify a maintenance window, check the days you want. Then, specify the maintenance window start hour for each day, and select **OK**. The maintenance window time is in UTC.
-For more information and instructions, see [Azure Cache for Redis administration - Schedule updates](cache-administration.md#schedule-updates)
+For more information and instructions, see [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates).
### Geo-replication
New Azure Cache for Redis instances are configured with the following default Re
| `lua-event-limit` |500 |Max size of script event queue. | | `client-output-buffer-limit normal` / `client-output-buffer-limit pubsub` |`0 0 0` / `32mb 8mb 60` |The client output buffer limits can be used to force disconnection of clients that aren't reading data from the server fast enough for some reason. A common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them. For more information, see [https://redis.io/topics/clients](https://redis.io/topics/clients). |
-<a name="databases"></a>
+### Databases
<sup>1</sup>The limit for `databases` is different for each Azure Cache for Redis pricing tier and can be set at cache creation. If no `databases` setting is specified during cache creation, the default is 16.
For more information about databases, see [What are Redis databases?](cache-deve
> The `databases` setting can be configured only during cache creation and only using PowerShell, CLI, or other management clients. For an example of configuring `databases` during cache creation using PowerShell, see [New-AzRedisCache](cache-how-to-manage-redis-cache-powershell.md#databases). >
-<a name="maxclients"></a>
+### Maxclients
-<sup>2</sup>`maxclients` is different for each Azure Cache for Redis pricing tier.
+<sup>2</sup>The `maxclients` property is different for each Azure Cache for Redis pricing tier.
- Basic and Standard caches - C0 (250 MB) cache - up to 256 connections
For cache instances using active geo-replication, the following commands are als
For more information about Redis commands, see [https://redis.io/commands](https://redis.io/commands).
-## Next steps
+## Related content
- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-) - [Monitor Azure Cache for Redis](cache-how-to-monitor.md)-
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-failover.md
Previously updated : 11/16/2022 Last updated : 09/29/2023
Refer to these design patterns to build resilient clients, especially the circui
To test a client application's resiliency, use a [reboot](cache-administration.md#reboot) as a manual trigger for connection breaks.
-Additionally, we recommend that you [schedule updates](cache-administration.md#schedule-updates) on a cache to apply Redis runtime patches during specific weekly windows. These windows are typically periods when client application traffic is low, to avoid potential incidents.
+Additionally, we recommend that you use [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates) on a cache to apply Redis runtime patches during specific weekly windows. These windows are typically periods when client application traffic is low, to avoid potential incidents.
For more information, see [Connection resilience](cache-best-practices-connection.md).
-## Next steps
+## Related content
-- [Schedule updates](cache-administration.md#schedule-updates) for your cache.-- Test application resiliency by using a [reboot](cache-administration.md#reboot).-- [Configure](cache-configure.md#memory-policies) memory reservations and policies.
+- [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates)
+- Test application resiliency by using a [reboot](cache-administration.md#reboot)
+- [Configure](cache-configure.md#memory-policies) memory reservations and policies
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Previously updated : 09/26/2023 Last updated : 09/29/2023 # What is Azure Cache for Redis?
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Redis Modules](cache-redis-modules.md) |-|-|-|✔|Preview| | [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔| | [Reboot](cache-administration.md#reboot) |✔|✔|✔|-|-|
-| [Scheduled updates](cache-administration.md#schedule-updates) |✔|✔|✔|-|-|
+| [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates) |✔|✔|✔|-|-|
> [!NOTE]
-> The Enterprise Flash tier currently supports only the RediSearch module (in preview) and the RedisJSON module.
+> The Enterprise Flash tier currently supports only the RediSearch module (in preview) and the RedisJSON module.
### Choosing the right tier Consider the following options when choosing an Azure Cache for Redis tier: - **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tier 4 GB - 2 TB, and the Enterprise Flash tier 300 GB - 4.5 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. The Enterprise tier typically has the best performance for most workloads, especially with larger cache instances. For more information, see [Performance testing](cache-best-practices-performance.md).
+- **Dedicated core for Redis server**: All caches except C0 run dedicated vCPUs. The Basic, Standard, and Premium tiers run open source Redis, which by design uses only one thread for command processing. On these tiers, having more vCPUs usually improves throughput performance because Azure Cache for Redis uses other vCPUs for I/O processing or for OS processes. However, adding more vCPUs per instance may not produce linear performance increases. Scaling out usually boosts performance more than scaling up in these tiers. Enterprise and Enterprise Flash tier caches run on Redis Enterprise which is able to utilize multiple vCPUs per instance, which can also significantly increase performance over other tiers. For Enterprise and Enterprise flash tiers, scaling up is recommended before scaling out. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization).
- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that causes timeouts in your application. For more information, see [Performance testing](cache-best-practices-performance.md).
- **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache.
- **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
Previously updated : 12/30/2021 Last updated : 09/29/2023 # Troubleshoot Azure Cache for Redis latency and timeouts
This section discusses troubleshooting for latency and timeout issues that occur
- [StackExchange.Redis timeout exceptions](#stackexchangeredis-timeout-exceptions) > [!NOTE]
-> Several of the troubleshooting steps in this guide include instructions to run Redis commands and monitor various performance metrics. For more information and instructions, see the articles in the [Additional information](#additional-information) section.
+> Several of the troubleshooting steps in this guide include instructions to run Redis commands and monitor various performance metrics. For more information and instructions, see the articles in the [Related content](#related-content) section.
> ## Client-side troubleshooting
Because of optimistic TCP settings in Linux, client applications hosted on Linux
If you're using `RedisSessionStateProvider`, ensure you have set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration). ```xml
-
<add name="AFRedisCacheSessionStateProvider" type="Microsoft.Web.Redis.RedisSessionStateProvider"
If you're using `RedisSessionStateProvider`, ensure you have set the retry timeo
Planned or unplanned maintenance can cause disruptions with client connections. The number and type of exceptions depends on the location of the request in the code path, and when the cache closes its connections. For instance, an operation that sends a request but hasn't received a response when the failover occurs might get a time-out exception. New requests on the closed connection object receive connection exceptions until the reconnection happens successfully.
-For information, check these other sections:
+For more information, check these other sections:
-- [Scheduling updates](cache-administration.md#schedule-updates)
+- [Update channel and Schedule updates](cache-administration.md#update-channel-and-schedule-updates)
- [Connection resilience](cache-best-practices-connection.md#connection-resilience) - `AzureRedisEvents` [notifications](cache-failover.md#can-i-be-notified-in-advance-of-planned-maintenance)
To mitigate situations where network bandwidth usage is close to maximum capacit
For more specific information to address timeouts when using StackExchange.Redis, see [Investigating timeout exceptions in StackExchange.Redis](https://azure.microsoft.com/blog/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/).
-## Additional information
-
-See these articles for additional information about latency issues and timeouts.
+## Related content
- [Troubleshoot Azure Cache for Redis client-side issues](cache-troubleshoot-client.md) - [Troubleshoot Azure Cache for Redis server-side issues](cache-troubleshoot-server.md)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 09/12/2023 Last updated : 09/29/2023 # What's New in Azure Cache for Redis
+## October 2023
+
+### Flush data operation for Basic, Standard and Premium Caches (preview)
+
+Basic, Standard, and Premium tier caches now support a built-in _flush_ operation that can be started at the control plane level. Use the _flush_ operation with your cache instead of executing the `FLUSHALL` command through the portal console or _redis-cli_.
+
+For more information, see [flush data operation](cache-administration.md#flush-data-preview).
+
+### Update channel for Basic, Standard and Premium Caches (preview)
+
+With Basic, Standard, or Premium tier caches, you can choose to receive updates early by selecting the **Preview** update channel instead of the default **Stable** channel.
+
+For more information, see [update channels](cache-administration.md#update-channel-and-schedule-updates).
+ ## September 2023 ### Remove TLS 1.0 and 1.1 from use with Azure Cache for Redis
-To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of the TLS 1.2 in October, 2024.
+To meet the industry-wide push toward the exclusive use of Transport Layer Security (TLS) version 1.2 or later, Azure Cache for Redis is moving toward requiring the use of TLS 1.2 in October 2024.
As a part of this effort, you can expect the following changes to Azure Cache for Redis:
azure-linux Concepts Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/concepts-core.md
Previously updated : 05/10/2023 Last updated : 09/29/2023
Microsoft Azure Linux is an open-sourced project maintained by Microsoft, which
## CVE infrastructure
-One of the responsibilities of Microsoft in maintaining the Azure Linux Container Host is establishing a process for CVEs, such as identifying applicable CVEs and publishing CVE fixes, and adhering to defined Service Level Agreements (SLAs) for package fixes. For the packages included in the Azure Linux Container Host, Azure Linux scans for security vulnerabilities daily via CVEs in the [National Vulnerability Database](https://nvd.nist.gov/).
+One of the responsibilities of Microsoft in maintaining the Azure Linux Container Host is establishing a process for CVEs, such as identifying applicable CVEs and publishing CVE fixes, and adhering to defined Service Level Agreements (SLAs) for package fixes. The Azure Linux team builds and maintains the SLA for package fixes for production purposes. For more information, see the [Azure Linux package repo structure](https://github.com/microsoft/CBL-Mariner/blob/2.0/toolkit/docs/building/building.md#packagesmicrosoftcom-repository-structure). For the packages included in the Azure Linux Container Host, Azure Linux scans for security vulnerabilities twice a day via CVEs in the [National Vulnerability Database (NVD)](https://nvd.nist.gov/).
Azure Linux CVEs are published in the [Security Update Guide (SUG) Common Vulnerability Reporting Framework (CVRF) API](https://api.msrc.microsoft.com/cvrf/v2.0/swagger/index). This allows you to get detailed Microsoft security updates about security vulnerabilities that have been investigated by the [Microsoft Security Response Center (MSRC)](https://www.microsoft.com/msrc). By collaborating with MSRC, Azure Linux can quickly and consistently discover, evaluate, and patch CVEs, and contribute critical fixes back upstream.
-High and critical CVEs are taken seriously and may be released out-of-band as a package update before a new AKS node image is available. Medium and low CVEs are included in the next image release.
+High and critical CVEs are taken seriously and may be released out-of-band as a package update before a new AKS node image is available. Medium and low CVEs are included in the next image release.
+
+> [!NOTE]
+> At this time, the scan results aren't published publicly.
## Feature additions and upgrades
-Given that Microsoft owns the entire Azure Linux Container Host stack, including the CVE infrastructure and other support streams, the process of submitting a feature request is streamlined. You can communicate directly with the Microsoft team that owns the Azure Linux Container Host, which ensures an accelerated process for submitting and implementing feature requests. If you have a feature request, please file an issue on the [AKS GitHub repository](https://github.com/Azure/AKS/issues).
+Given that Microsoft owns the entire Azure Linux Container Host stack, including the CVE infrastructure and other support streams, the process of submitting a feature request is streamlined. You can communicate directly with the Microsoft team that owns the Azure Linux Container Host, which ensures an accelerated process for submitting and implementing feature requests. If you have a feature request, please file an issue on the [AKS GitHub repository](https://github.com/Azure/AKS/issues).
## Testing
azure-linux Support Cycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-cycle.md
+
+ Title: Azure Linux Container Host for AKS support lifecycle
+description: Learn about the support lifecycle for the Azure Linux Container Host for AKS.
++++ Last updated : 09/29/2023++
+# Azure Linux Container Host support lifecycle
+
+This article describes the support lifecycle for the Azure Linux Container Host for AKS.
+
+> [!IMPORTANT]
+> Microsoft is committed to meeting this support lifecycle and reserves the right to make changes to the support agreement and new scenarios that require modifications at any time with proper notice to customers and partners.
+
+## Image releases
+
+### Minor releases
+
+At the beginning of each month, Azure Linux releases a minor image version containing medium, high, and critical package updates from the previous month. This release also includes minor kernel updates and bug fixes.
+
+For more information on the CVE service level agreement (SLA), see [CVE infrastructure](./concepts-core.md#cve-infrastructure).
+
+### Major releases
+
+About every two years, Azure Linux releases a major image version containing new packages and package versions, an updated kernel, and enhancements to security, tooling, performance, and developer experience. Azure Linux releases a beta version of the major release about three months before the general availability (GA) release.
+
+Azure Linux supports previous releases for six months following the GA release of the major image version. This support window enables a smooth migration between major releases while providing stable security and support.
+
+> [!NOTE]
+> The preview version of Azure Linux 3.0 is expected to release in March 2024.
+
+## Next steps
+
+- Learn more about [Azure Linux Container Host support](./support-help.md).
azure-linux Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-help.md
Previously updated : 05/10/2023 Last updated : 09/29/2023 # Support and help for the Azure Linux Container Host for AKS
Here are suggestions for where you can get help when developing your solutions w
We have supporting documentation explaining how to determine, diagnose, and fix issues that you might encounter when using the Azure Linux Container Host. Use this article to troubleshoot deployment failures, security-related problems, connection issues and more. For a full list of self help troubleshooting content, see the Azure Linux Container Host troubleshooting documentation:+ - [Package upgrade](./troubleshoot-packages.md) - [Kernel versioning](./troubleshoot-kernel.md) - [Troubleshoot common issues](/troubleshoot/azure/azure-kubernetes/troubleshoot-common-mariner-aks)
Explore the range of [Azure support options and choose the plan](https://azure.m
:::image type="icon" source="./media/logos/github-logo.png" alt-text="":::
+### Get support for Azure Linux
+
+This project uses [GitHub issues](https://github.com/microsoft/CBL-Mariner/issues/new) to track [bugs](https://github.com/microsoft/CBL-Mariner/issues/new?labels=bug) and [feature requests](https://github.com/microsoft/CBL-Mariner/issues/new?labels=enhancement). For questions about using this project, see the [CBL-Mariner Tutorials repo](https://github.com/Microsoft/CBL-MarinerTutorials) and the [Contributor's Guide](https://github.com/microsoft/CBL-Mariner/blob/main/CONTRIBUTING.md).
+
+### Get support for development and management tools
+ If you need help with the language and tools used to develop and manage the Azure Linux Container Host on Azure Kubernetes Service, open an issue in its repository on GitHub. | Library | GitHub issues URL| | | | | Azure Kubernetes Service | https://github.com/Azure/AKS/issues | | Azure CLI | https://github.com/Azure/azure-cli/issues |
-<!-- | Terraform | https://github.com/Azure/terraform/issues | -->
+| CBL-Mariner | https://github.com/microsoft/CBL-Mariner/issues |
-## Stay informed of updates and new releases
+## Stay informed of updates and new releases
:::image type="icon" source="./media/logos/updates-logo.png" alt-text="":::
Learn about important product updates, roadmap, and announcements in [Azure Upda
## Next steps
-Learn more about [Azure Linux Container Host](./index.yml).
+Learn more about [Azure Linux Container Host](./index.yml).
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Each of the following sections makes API requests by using the five different lo
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the negative distance from the main site geof
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
In the preceding GeoJSON response, the equipment has remained in the main site g
5. Enter the following URL. The request should look like the following URL (replace `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section]). ```HTTP
- https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
+ https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
``` 6. Select **Send**.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Australia East * Australia Southeast * Brazil South
+* Brazil Southeast
* Canada Central * Canada East * Central India
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Australia East * Australia Southeast * Brazil South
+* Brazil Southeast
* Canada Central * Canada East * Central India
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
na Previously updated : 05/03/2023 Last updated : 10/02/2023
Azure NetApp Files customer-managed keys is supported for the following regions:
* Australia Southeast * Brazil South * Canada Central
+* Canada East
+* Central India
* Central US * East Asia * East US
Azure NetApp Files customer-managed keys is supported for the following regions:
* Southeast Asia * Sweden Central * Switzerland North
+* Switzerland West
* UAE Central * UAE North * UK South
+* UK West
+* US Gov Texas
* US Gov Virginia * West Europe * West US
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
Azure NetApp Files volume replication is supported between various [Azure region
| Australia | Australia Central | Australia Central 2 | | Australia | Australia East | Australia Southeast | | Asia-Pacific | East Asia | Southeast Asia |
+| Brazil | Brazil South | Brazil Southeast |
| Brazil/North America | Brazil South | South Central US | | Canada | Canada Central | Canada East | | Europe | North Europe | West Europe |
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
You'll need an Azure account in an Azure subscription that adheres to one of the
- **Service:** All services > Azure VMware Solution - **Resource:** General question - **Summary:** Need capacity
- - **Problem type:** Capacity Management Issues
- - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
+ - **Problem type:** Deployment
+ - **Problem subtype:** AVS Quota request
1. In the **Description** of the support ticket, on the **Details** tab, provide information for: - Region Name - Number of hosts
+ - Host SKU type
- Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage) >[!NOTE]
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
description: Overview of features in Azure Cloud Shell ms.contributor: jahelmic Previously updated : 03/03/2023 Last updated : 10/02/2023 tags: azure-resource-manager Title: Azure Cloud Shell features
first launch. Once completed, Cloud Shell will automatically attach your storage
sessions. Use best practices when storing secrets such as SSH keys. Services, like Azure Key Vault, have [tutorials for setup][02].
-[Learn more about persisting files in Cloud Shell.][28]
+Learn more about [Persisting files in Cloud Shell][28].
### Azure drive (Azure:)
Cloud Shell comes with the following Azure command-line tools preinstalled:
| Tool | Version | Command | | - | -- | |
-| [Azure CLI][08] | 2.51.0 | `az --version` |
+| [Azure CLI][05] | 2.51.0 | `az --version` |
| [Azure PowerShell][06] | 10.2.0 | `Get-Module Az -ListAvailable` | | [AzCopy][04] | 10.15.0 | `azcopy --version` | | [Azure Functions CLI][01] | 4.0.5198 | `func --version` |
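As a quick sketch, you can check several of these tools in one pass from a Cloud Shell PowerShell session; each tool formats its version output differently.

```powershell
# Print versions of a few preinstalled tools (output formats vary by tool).
az version
(Get-Module Az -ListAvailable | Select-Object -First 1).Version
azcopy --version
func --version
```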
You can verify the version of the language using the command listed in the table
- [Docker Desktop][15] - [Kubectl][20] - [Helm][19]-- [DC/OS CLI][14]
+- [D2iQ Kubernetes Platform CLI][14]
#### Databases
You can verify the version of the language using the command listed in the table
## Next steps -- [Bash in Cloud Shell Quickstart][30]-- [PowerShell in Cloud Shell Quickstart][29]
+- [Cloud Shell Quickstart][29]
- [Learn about Azure CLI][05] - [Learn about Azure PowerShell][06]
You can verify the version of the language using the command listed in the table
[11]: https://developer.hashicorp.com/packer/docs [12]: https://docs.chef.io/ [13]: https://docs.cloudfoundry.org/cf-cli/
-[14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start
+[14]: https://docs.d2iq.com/dkp/2.6/azure-infrastructure
[15]: https://docs.docker.com/desktop/ [16]: https://dotnet.microsoft.com/download/dotnet/7.0 [17]: https://github.com/Azure/CloudShell/issues
You can verify the version of the language using the command listed in the table
[26]: media/features/exchangeonline.png [27]: medilets.png [28]: persisting-shell-storage.md
-[29]: quickstart-powershell.md
-[30]: quickstart.md
+[29]: quickstart.md
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Alternatively, Communication Services direct routing supports a wildcard in the
Customers who already use Office 365 and have a domain registered in Microsoft 365 Admin Center can use SBC FQDN from the same domain.
-An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match with `sbc.test.contoso.com`.
+An example would be using `*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match with `sbc.test.contoso.com`.
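As an illustrative sketch (not part of the service), the wildcard behaves like a single-label pattern; a plain `-like` wildcard would be too permissive, so a regex that excludes extra dots models the rule more faithfully:

```powershell
# The wildcard covers exactly one host label: [^.]+ forbids additional dots.
'sbc.contoso.com'      -match '^[^.]+\.contoso\.com$'   # True  (matches)
'sbc.test.contoso.com' -match '^[^.]+\.contoso\.com$'   # False (doesn't match)
```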
>[!NOTE] > SBC FQDN in Azure Communication Services direct routing must be different from SBC FQDN in Teams Direct Routing.
communication-services Domain Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md
Previously updated : 03/11/2023 Last updated : 10/02/2023
This article describes the process of validating domain name ownership by using
A fully qualified domain name (FQDN) consists of two parts: host name and domain name. For example, if your session border controller (SBC) name is `sbc1.contoso.com`, then `sbc1` is the host name and `contoso.com` is the domain name. If an SBC has an FQDN of `acs.sbc1.testing.contoso.com`, then `acs` is the host name and `sbc1.testing.contoso.com` is the domain name.
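To make the host/domain split concrete, here's a small PowerShell sketch that splits an FQDN at the first dot; the variable names are illustrative only.

```powershell
# Split an FQDN into its host name (before the first dot) and domain name (after it).
$fqdn = 'acs.sbc1.testing.contoso.com'
$hostName, $domainName = $fqdn -split '\.', 2
$hostName     # acs
$domainName   # sbc1.testing.contoso.com
```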
-To use direct routing in Azure Communication Services, you need to validate that you own the domain part of your SBC FQDN. After that, you can configure the SBC FQDN and port number and then create voice routing rules.
+To use direct routing in Azure Communication Services, you need to validate that you own either the domain part of your SBC FQDN or the entire SBC FQDN. After that, you can configure the SBC FQDN and port number and then create voice routing rules.
-When you're verifying the domain name portion of the SBC FQDN, keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name.
+When you're verifying the ownership of the SBC FQDN, keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name.
+
+Validating the domain part makes sense if you plan to add multiple SBCs from the same domain namespace. For example, if you're using `sbc-eu.contoso.com`, `sbc-us.contoso.com`, and `sbc-af.contoso.com`, you can validate the `contoso.com` domain once and add SBCs from that domain later without extra validation.
+Validating the entire FQDN is helpful if you're a service provider and don't want to validate your base domain ownership with every customer. For example, if you're running the SBCs `customer1.acs.adatum.biz`, `customer2.acs.adatum.biz`, and `customer3.acs.adatum.biz`, you don't need to validate `acs.adatum.biz` for every Communication Services resource; instead, you validate the entire FQDN each time. This option provides a more granular security approach.
-If you're using a subdomain, make sure that this subdomain is also added and verified. For example, if you want to use `sbc.acs.contoso.com`, you need to register `acs.contoso.com`.
## Add a new domain name 1. Open the Azure portal and go to your [Communication Services resource](../../quickstarts/create-communication-resource.md). 1. On the left pane, under **Voice Calling - PSTN**, select **Direct routing**. 1. On the **Domains** tab, select **Connect domain**.
-1. Enter the domain part of the SBC FQDN.
+1. Enter the domain part of the SBC FQDN, or the entire SBC FQDN.
1. Reenter the domain name. 1. Select **Confirm**, and then select **Add**.
If you're using a subdomain, make sure that this subdomain is also added and ver
[![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
- It might take up to 30 minutes for a new DNS record to propagate on the internet.
+ It might take up to 30 minutes for a new DNS record to propagate on the Internet.
1. Select **Next**. If you set up everything correctly, **Domain status** should change to **Verified** next to the added domain.
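If the status doesn't change to **Verified** right away, you can check whether the TXT record has propagated yet. Here's a sketch using the Windows `Resolve-DnsName` cmdlet; the domain name is a placeholder for the one you added.

```powershell
# Query the TXT records for the domain you're verifying (placeholder name).
Resolve-DnsName -Name 'contoso.com' -Type TXT | Select-Object Name, Strings
```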
If you want to remove a domain from your Azure Communication Services direct rou
### Quickstarts - [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)-- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
In this quickstart you are going to learn how to start a call from Azure Communi
4. Get Object ID of the Auto Attendant via Graph API. 5. Start a call with Azure Communication Services Calling SDK.
-If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).
+If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/voice-apps-calling).
[!INCLUDE [Enable interoperability in your Teams tenant](../../concepts/includes/enable-interoperability-for-teams-tenant.md)]
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
In this quickstart you are going to learn how to start a call from Azure Communi
4. Get Object ID of the Call Queue via Graph API. 5. Start a call with Azure Communication Services Calling SDK.
-If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-cte-video-calling).
+If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/voice-apps-calling).
[!INCLUDE [Enable interoperability in your Teams tenant](../../concepts/includes/enable-interoperability-for-teams-tenant.md)]
cosmos-db How To Enable Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-enable-audit.md
Previously updated : 06/12/2023 Last updated : 10/01/2023 # Audit logging in Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
-> [!IMPORTANT]
-> The pgAudit extension in Azure Cosmos DB for PostgreSQL is currently in preview. This
-> preview version is provided without a service level agreement, and it's not
-> recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities.
->
-> You can see a complete list of other new features in [preview features](product-updates.md).
- Audit logging of database activities in Azure Cosmos DB for PostgreSQL is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session or object audit logging. If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md).
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 09/27/2023 Last updated : 10/01/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
-### September 2023
+### October 2023
+* General availability: Azure SDKs are now generally available for all Azure Cosmos DB for PostgreSQL management operations supported in REST APIs.
+ * [.NET SDK](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDBForPostgreSql/)
+ * [Go SDK](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/cosmosforpostgresql/armcosmosforpostgresql)
+ * [Java SDK](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmosdbforpostgresql/)
+ * [JavaScript SDK](https://www.npmjs.com/package/@azure/arm-cosmosdbforpostgresql/)
+ * [Python SDK](https://pypi.org/project/azure-mgmt-cosmosdbforpostgresql/)
+* General availability: Azure CLI is now available for all Azure Cosmos DB for PostgreSQL management operations supported in REST APIs.
+ * See [details](/cli/azure/cosmosdb/postgres).
+* General availability: Audit logging of database activities in Azure Cosmos DB for PostgreSQL is available through the PostgreSQL pgAudit extension.
+ * See [details](./how-to-enable-audit.md).
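As a minimal sketch of exercising the management surface announced above without a language SDK, `Invoke-AzRestMethod` from Az PowerShell can call the same REST APIs; the provider path and API version below are assumptions, so confirm them against the REST reference.

```powershell
# List Azure Cosmos DB for PostgreSQL clusters in the current subscription.
# The resource type and api-version are assumptions; check the REST API docs.
$sub  = (Get-AzContext).Subscription.Id
$path = "/subscriptions/$sub/providers/Microsoft.DBforPostgreSQL/serverGroupsv2" +
        "?api-version=2022-11-08"
(Invoke-AzRestMethod -Method GET -Path $path).Content
```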
+### September 2023
* General availability: [PostgreSQL 16](https://www.postgresql.org/docs/release/16.0/) support. * See all supported PostgreSQL versions [here](./reference-versions.md#postgresql-versions). * [Upgrade to PostgreSQL 16](./howto-upgrade.md) * General availability: [Citus 12.1 with new features and PostgreSQL 16 support](https://www.citusdata.com/updates/v12-1).
-* General availability: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions.
+* General availability: Data encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) (CMK) is now supported for all available regions.
* See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys. * Preview: Geo-redundant backup and restore * Learn more about [backup and restore Azure Cosmos DB for PostgreSQL](./concepts-backup.md)
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
* [Geo-redundant backup and restore](./concepts-backup.md#backup-redundancy) * [32 TiB storage per node in multi-node clusters](./resources-compute.md#multi-node-cluster) * [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview)
-* [Azure CLI support for Azure Cosmos DB for PostgreSQL](/cli/azure/cosmosdb/postgres)
-* Azure SDKs: [.NET](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDBForPostgreSql/1.0.0-beta.1), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/cosmosforpostgresql/armcosmosforpostgresql@v0.1.0), [Java](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmosdbforpostgresql/1.0.0-beta.1/overview), [JavaScript](https://www.npmjs.com/package/@azure/arm-cosmosdbforpostgresql/v/1.0.0-beta.1), and [Python](https://pypi.org/project/azure-mgmt-cosmosdbforpostgresql/1.0.0b1/)
-* [Database audit with pgAudit](./how-to-enable-audit.md)
## Contact us
databox-online Azure Stack Edge Create Vm With Custom Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-create-vm-with-custom-size.md
Previously updated : 08/28/2023 Last updated : 09/07/2023 # Customer intent: As an IT admin, I need to understand how to create VM images with custom number of cores, memory, and GPU count.
databox-online Azure Stack Edge Gpu 2309 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md
Previously updated : 09/25/2023 Last updated : 10/01/2023
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2309** release, which maps to software version **3.2.2380.1652**.
+This article applies to the **Azure Stack Edge 2309** release, which maps to software version **3.2.2380.1632**.
> [!Warning] > In this release, you must update the packet core version to AP5GC 2308 before you update to Azure Stack Edge 2309. For detailed steps, see [Azure Private 5G Core 2308 release notes](../private-5g-core/azure-private-5g-core-release-notes-2308.md).
You can update to the latest version using the following update paths:
The 2309 release has the following new features and enhancements: -- Beginning this release, you have the option of selecting Kubernetes profiles based on your workloads. You can also configure Maximum Transmission Unit (MTU) for the network interfaces on your device.-- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
+- Beginning this release, you have the option of selecting Kubernetes profiles based on your workloads. You can select the Kubernetes workload profiles via the [local UI](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1) or via the [PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-workload-profiles) of your device.
+- In this release, you can configure Maximum Transmission Unit (MTU) for the network interfaces on your device. For more information, see [Configure MTU via the local UI](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node#configure-virtual-switches).
+- In this release, new VM sizes have been added. For more information, see the [Supported VM sizes for Azure Stack Edge](azure-stack-edge-gpu-virtual-machine-sizes.md).
+- Starting this release, you can create a VM image with custom sizes and use it to create your VMs. For more information, see [Create a VM image with custom size](azure-stack-edge-create-vm-with-custom-size.md).
+- Beginning this release, you can create VM images from an Azure Marketplace image or an image in your Storage account. For more information, see [Create a VM image from Azure Marketplace or Azure storage account](azure-stack-edge-create-a-vm-from-azure-marketplace.md).
+- Several security, supportability, diagnostics, resiliency, and performance improvements were made in this release.
- You can deploy Azure Kubernetes Service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+- In this release, a precheck that verifies whether the Azure Resource Manager certificate has expired was added to the Azure Kubernetes Service (AKS) update.
+- The `Set-HcsKubernetesAzureMonitorConfiguration` PowerShell cmdlet that enables Azure Monitor is fixed in this release. Though the cmdlet is available to use, we recommend that you configure Azure Monitor for Azure Stack Edge via the Azure Arc portal.
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
+ ## Issues fixed in this release | No. | Feature | Issue | | | | | |**1.**|Core Azure Stack Edge platform and Azure Kubernetes Service (AKS) on Azure Stack Edge |Critical bug fixes to improve workload availability during two-node Azure Stack Edge update of core Azure Stack Edge platform and AKS on Azure Stack Edge. |
+|**2.**|Virtual machines and virtual network |In previous releases, there were VM provisioning timeouts when a virtual network was created on a port that supported accelerated networking and the primary network interface associated with the VM was also attached to the same virtual network. <br><br>A fix was incorporated in this release that always turns off accelerated networking on the primary network interface for the VM. |
+|**3.**|Virtual machines and virtual network |In the earlier releases, the primary network interface on the VM was not validated to have a reachable gateway IP. <br><br>In this release, this issue is fixed. The validation of the gateway IP helps identify any potential network configuration issues before the VM provisioning timeout occurs. |
+|**4.**|Virtual machines and virtual network |In the earlier releases, the MAC address allocation algorithm only considered the last two octets whereas the MAC address range actually spanned the last three octets. This discrepancy led to allocation conflicts in certain corner cases, resulting in overlapping MAC addresses. <br><br>The MAC address allocation is revised in this release to fix this issue.|
+|**5.**|Azure Kubernetes Service (AKS) |In previous releases, if there was a host power failure, the pod with SRIOV capable CNIs also failed with the following error: <br><br>`Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "<GUID>": plugin type="multus" name="multus-cni-network" failed (add): [core/core-upf-pp-0:n3-dpdk]: error adding container to network "n3-dpdk": failed to rename netvsc link: failed to rename link "001dd81c9dd4_VF" to "001dd81c9dd4": file exists.`<br><br>This failure of pods with SRIOV-capable CNIs is fixed in this release. For the AKS SRIOV CNI plugin, the driver name returned from ethtool is used to determine whether the device is a VF or netvsc. |
<!--## Known issues in this release
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
Previously updated : 09/28/2023 Last updated : 10/01/2023 # Manage an Azure Stack Edge Pro GPU device via Windows PowerShell
After virtual switches are created, you can enable the switches for Kubernetes c
1. [Connect to the PowerShell interface](#connect-to-the-powershell-interface). 2. Use the `Get-HcsApplianceInfo` cmdlet to get current `KubernetesPlatform` and `KubernetesWorkloadProfile` settings for your device.
+3. Use the `Get-HcsKubernetesWorkloadProfiles` cmdlet to identify the profiles available on your Azure Stack Edge device.
- The following example shows the usage of this cmdlet:
-
- ```powershell
- Get-HcsApplianceInfo
- ```
-
-3. Use the `Set-HcsKubernetesWorkloadProfile` cmdlet to set the workload profile for AP5GC an Azure Private MEC solution.
+ ```powershell
+ [Device-IP]: PS>Get-HcsKubernetesWorkloadProfiles
+ Type Description
+ ---- -----------
+ AP5GC an Azure Private MEC solution
+ SAP a SAP Digital Manufacturing for Edge Computing or another Microsoft partner solution
+ NONE other workloads
+ [Device-IP]: PS>
+ ```
+
+4. Use the `Set-HcsKubernetesWorkloadProfile` cmdlet to set the workload profile for AP5GC, an Azure Private MEC solution.
The following example shows the usage of this cmdlet: ```powershell Set-HcsKubernetesWorkloadProfile -Type "AP5GC" ```- Here is sample output for this cmdlet: ```powershell
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 09/22/2023 Last updated : 10/02/2023 # Update your Azure Stack Edge Pro GPU
This article describes the steps required to install update on your Azure Stack Edge Pro with GPU via the local web UI and via the Azure portal. You apply the software updates or hotfixes to keep your Azure Stack Edge Pro device and the associated Kubernetes cluster on the device up-to-date.
-The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.
+> [!NOTE]
+> The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.
## About latest updates
The current update is Update 2309. This update installs two updates, the device
The associated versions for this update are: -- Device software version: Azure Stack Edge 2309 (3.2.2380.1652)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2309 (3.2.2380.1652)
+- Device software version: Azure Stack Edge 2309 (3.2.2380.1632)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2309 (3.2.2380.1632)
- Kubernetes server version: v1.25.5 - IoT Edge version: 0.1.0-beta15 - Azure Arc version: 1.11.7-- GPU driver version: 530.30.02-- CUDA version: 12.1
+- GPU driver version for Kubernetes for Azure Stack Edge: 530.30.02
+- GPU driver version for Azure Kubernetes Service (AKS): 525.85.12
+- CUDA version for Kubernetes for Azure Stack Edge: 12.1
+- CUDA version for Azure Kubernetes Service (AKS): 12.0
For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2309-release-notes.md).
Use the following steps to update your Azure Stack Edge version and Kubernetes v
If you are running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2303 and then to 2309.
-If you are running 2303, you can update both your device version and Kubernetes version directly to 2309.
+If you are running 2303, you can update both your device version and Kubernetes version directly to 2309.
In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2309.
Depending on the software version that you are running, install process may diff
[!INCLUDE [azure-stack-edge-install-2110-updates](../../includes/azure-stack-edge-install-2110-updates.md)]
+![Screenshot of updated software version in local UI.](./media/azure-stack-edge-gpu-install-update/portal-update-17.png)
+ ### [version 2105 and earlier](#tab/version-2105-and-earlier) 1. When the updates are available for your device, you see a notification in the **Overview** page of your Azure Stack Edge resource. Select the notification or from the top command bar, **Update device**. This will allow you to apply device software updates.
Do the following steps to download the update from the Microsoft Update Catalog.
![Search catalog.](./media/azure-stack-edge-gpu-install-update/download-update-1.png)
-2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
+1. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
The update listing appears as **Azure Stack Edge Update 2309**.
- Specify the update package for your environment:
+ > [!NOTE]
+ > Make sure to verify which workload you are running on your device [via the local UI](./azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-compute-ips-1) or [via the PowerShell](./azure-stack-edge-connect-powershell-interface.md) interface of the device. Depending on the workload that you are running, the update package will differ.
+
+ Specify the update package for your environment. Use the following table as a reference:
+
+ | Kubernetes | Local UI Kubernetes workload profile | Update package name | Example Update File |
+ ||--||--|
+ | Azure Kubernetes Service | Azure Private MEC Solution in your environment<br><br>SAP Digital Manufacturing for Edge Computing or another Microsoft Partner Solution in your Environment | Azure Stack Edge Update 2309 Kubernetes Package for Private MEC/SAP Workloads | release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_MsKubernetes_Package |
+ | Kubernetes for Azure Stack Edge |Other workloads in your environment | Azure Stack Edge Update 2309 Kubernetes Package for Non Private MEC/Non SAP Workloads | \release~ase-2307d.3.2.2380.1632-42623-79365624-release_host_AseKubernetes_Package |
- - Azure Stack Edge Update 2309 Software Package.
- - host update .exe
- - Azure Stack Edge Update 2309 Kubernetes Package for Private MEC/SAP Workloads.
- - msk8.0.exe
- - msk8.1.exe
- - Azure Stack Edge Update 2309 Kubernetes Package for Non Private MEC/Non SAP Workloads.
- - asek8.0.exe
- - asek8.1.exe
-
- <!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two packages to download for the update. The first package will have two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has three files for the Kubernetes updates (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+1. Select **Download**. There are two packages to download for the update. The first package will have two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has two files for the Kubernetes updates (*Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
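If you stage the files on a network share, a sketch like the following copies everything you downloaded; both paths are placeholders.

```powershell
# Copy the downloaded update packages to a share reachable from the device.
$downloadFolder = 'C:\AseUpdates\2309'            # placeholder local folder
$shareFolder    = '\\fileserver\ase-updates\2309' # placeholder network share
Copy-Item -Path (Join-Path $downloadFolder '*.exe') -Destination $shareFolder -Force
```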
### Install the update or the hotfix
This procedure takes around 20 minutes to complete. Perform the following steps
1. In the local web UI, go to **Maintenance** > **Software update**. Make a note of the software version that you are running.
- ![update device 2.](./media/azure-stack-edge-gpu-install-update/local-ui-update-2.png)
2. Provide the path to the update file. You can also browse to the update installation file if placed on a network share. Select the two software files (with *SoftwareUpdatePackage.0.exe* and *SoftwareUpdatePackage.1.exe* suffix) together.
- ![Screenshot of files selected for the device software update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-3-a.png)
+ <!--![Screenshot of files selected for the device software update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-3-a.png)-->
3. Select **Apply update**.
This procedure takes around 20 minutes to complete. Perform the following steps
6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2309**.
-7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (file with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffix) and repeat the above steps to apply update.
+7. You will now update the Kubernetes software version. Select the remaining two Kubernetes files together (file with the *Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe* suffix) and repeat the above steps to apply update.
- ![Screenshot of files selected for the Kubernetes update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-7.png)
+ <!--![Screenshot of files selected for the Kubernetes update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-7.png)-->
8. Select **Apply Update**.
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 07/11/2023 Last updated : 10/02/2023 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
defender-for-cloud Tutorial Enable Storage Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md
Last updated 08/21/2023
# Deploy Microsoft Defender for Storage
-Microsoft Defender for Storage is an Azure-native solution offering an advanced layer of intelligence for threat detection and mitigation in storage accounts, powered by Microsoft Threat Intelligence, Microsoft Defender Antimalware technologies, and Sensitive Data Discovery. With protection for Azure Blob Storage, Azure Files, and Azure Data Lake Storage services, it provides a comprehensive alert suite, near real-time Malware Scanning (add-on), and sensitive data threat detection (no extra cost), allowing quick detection, triage, and response to potential security threats with contextual information. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
+Microsoft Defender for Storage is an Azure-native solution offering an advanced layer of intelligence for threat detection and mitigation in storage accounts, powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684), Microsoft Defender Antimalware technologies, and Sensitive Data Discovery. With protection for Azure Blob Storage, Azure Files, and Azure Data Lake Storage services, it provides a comprehensive alert suite, near real-time Malware Scanning (add-on), and sensitive data threat detection (no extra cost), allowing quick detection, triage, and response to potential security threats with contextual information. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
With Microsoft Defender for Storage, organizations can customize their protection and enforce consistent security policies by enabling it on subscriptions and storage accounts with granular control and flexibility.
-Defender for Storage in Microsoft Defender for Cloud is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
- > [!TIP] > If you're currently using Microsoft Defender for Storage classic, consider [migrating to the new plan](defender-for-storage-classic-migrate.md), which offers several benefits over the classic plan.
Enabling Defender for Storage via a policy is recommended because it facilitates
- Learn how to [enable and Configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md). +
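As a hedged sketch (one enablement path among several), the `Set-AzSecurityPricing` cmdlet from the Az.Security module can turn the plan on at subscription scope; the `-SubPlan` value shown is an assumption, so verify it against the cmdlet reference.

```powershell
# Enable Microsoft Defender for Storage (new plan) on the current subscription.
# The SubPlan value is an assumption; confirm against the Az.Security reference.
Set-AzSecurityPricing -Name 'StorageAccounts' -PricingTier 'Standard' -SubPlan 'DefenderForStorageV2'
```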
firewall Firewall Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-known-issues.md
+
+ Title: Azure Firewall known issues
+description: Learn about Azure Firewall known issues.
++++ Last updated : 10/02/2023+++
+# Azure Firewall known issues
+
+This article lists the known issues for [Azure Firewall](overview.md). It's updated as issues are resolved.
+
+## Azure Firewall Standard
+
+Azure Firewall Standard has the following known issues:
+
+> [!NOTE]
+> Any issue that applies to Standard also applies to Premium.
+
+|Issue |Description |Mitigation |
+||||
+|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/outbound-rules.md#limitations). We're exploring options to support this scenario in a future release.|
+|Missing PowerShell and CLI support for ICMP|Azure PowerShell and CLI don't support ICMP as a valid protocol in network rules.|It's still possible to use ICMP as a protocol via the portal and the REST API. We're working to add ICMP in PowerShell and CLI soon.|
+|FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.|
+|Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.|
+|Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering masks threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.|
+|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|A fix is being investigated.|
+|With secured virtual hubs, availability zones can only be configured during deployment.| You can't configure Availability Zones after a firewall with secured virtual hubs has been deployed.|This is by design.|
+|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This is required today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall.
+|SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using nonstandard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules.
+|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by the Azure platform. This is the default platform behavior in Azure; Azure Firewall doesn't introduce any more specific restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587 but also support other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25.
+|SNAT port exhaustion|Azure Firewall currently supports 2496 ports per Public IP address per backend Virtual Machine Scale Set instance. By default, there are two Virtual Machine Scale Set instances. So, there are 4992 ports per flow (destination IP, destination port, and protocol (TCP or UDP)). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for virtual network deployments. <br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).|
+|DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established.
+|Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.|
+|Inbound Passive FTP may not work depending on your FTP server configuration |Passive FTP establishes different connections for control and data channels. Inbound connections on Azure Firewall are SNATed to one of the firewall private IP addresses to ensure symmetric routing. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|Preserving the original source IP address is being investigated. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses.|
+|Active FTP doesn't work when the FTP client must reach an FTP server across the internet.|Active FTP utilizes a PORT command from the FTP client that directs the FTP server what IP and port to use for the data channel. This PORT command utilizes the private IP of the client that can't be changed. Client-side traffic traversing the Azure Firewall is NATed for Internet-based communications, making the PORT command seen as invalid by the FTP server.|This is a general limitation of Active FTP when used with client-side NAT.|
+|NetworkRuleHit metric is missing a protocol dimension|The ApplicationRuleHit metric allows filtering based on protocol, but this capability is missing in the corresponding NetworkRuleHit metric.|A fix is being investigated.|
+|NAT rules with ports between 64000 and 65535 are unsupported|Azure Firewall allows any port in the 1-65535 range in network and application rules, however NAT rules only support ports in the 1-63999 range.|This is a current limitation.
+|Configuration updates may take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.|
+|Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indicator (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you may be able to control the connection using a network rule instead of an application rule. See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.|
+|Can't add firewall policy tags using the portal or Azure Resource Manager (ARM) templates|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal or ARM templates. The following error is generated: *Couldn't save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.|
+|IPv6 not currently supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
+|Updating multiple IP Groups fails with conflict error.|When you update two or more IP Groups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IP Group, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IP Group is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IP Groups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
+|Removing RuleCollectionGroups using ARM templates not supported.|Removing a RuleCollectionGroup using ARM templates isn't supported and results in failure.|This isn't a supported operation.|
+|DNAT rule for allow *any* (*) will SNAT traffic.|If a DNAT rule allows *any* (*) as the Source IP address, then an implicit Network rule matches VNet-VNet traffic and will always SNAT the traffic.|This is a current limitation.|
+|Adding a DNAT rule to a secured virtual hub with a security provider isn't supported.|This results in an asymmetric route for the returning DNAT traffic, which goes to the security provider.|Not supported.|
+| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. |
+|XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
+|Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
+|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.|
+|Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone doesn't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desired state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.|
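To make the SNAT port arithmetic in the table above concrete, here's a short sketch of the math; the counts come straight from the SNAT port exhaustion row and the five-public-IP mitigation.

```powershell
# SNAT capacity per flow = ports per public IP per instance x instances x public IPs.
$portsPerIpPerInstance = 2496
$defaultInstances      = 2
$portsPerIpPerInstance * $defaultInstances * 1   # 4992  (single public IP, default scale)
$portsPerIpPerInstance * $defaultInstances * 5   # 24960 (five public IPs, as recommended)
```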
+
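For the IP Groups row above, the safe sequencing can be scripted as a simple wait loop between updates; the firewall and resource group names are placeholders, and polling `ProvisioningState` is an assumption about how you'd detect the *Updating* state.

```powershell
# Wait until the firewall leaves the Updating state before touching the next IP Group.
do {
    Start-Sleep -Seconds 30
    $state = (Get-AzFirewall -Name 'fw01' -ResourceGroupName 'rg-network').ProvisioningState
} while ($state -eq 'Updating')
```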
+## Azure Firewall Premium
+
+Azure Firewall Premium has the following known issues:
++
+|Issue |Description |Mitigation |
+||||
+|ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in the HTTPS handshake.|Today, only Firefox supports ESNI through custom configuration. The suggested workaround is to disable this feature.|
+|Client Certification Authentication isn't supported|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure Firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
+|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAIN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.|
+|Untrusted customer signed certificates|Customer signed certificates aren't trusted by the firewall once received from an intranet-based web server.|A fix is being investigated.|
+|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.|
+|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.|
+|TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
+|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
+|TLSi intermediate CA certificate expiration|In some unique cases, the intermediate CA certificate can expire two months before the original expiration date.|Renew the intermediate CA certificate two months before the original expiration date. A fix is being investigated.|
+
+## Next steps
+
+- [Azure Firewall Premium features](premium-features.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
To learn what's new with Azure Firewall, see [Azure updates](https://azure.micro
## Known issues
-### Azure Firewall Standard
-
-Azure Firewall Standard has the following known issues:
-
-> [!NOTE]
-> Any issue that applies to Standard also applies to Premium.
-
-|Issue |Description |Mitigation |
-||||
-|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/outbound-rules.md#limitations). We're exploring options to support this scenario in a future release.|
-|Missing PowerShell and CLI support for ICMP|Azure PowerShell and CLI don't support ICMP as a valid protocol in network rules.|It's still possible to use ICMP as a protocol via the portal and the REST API. We're working to add ICMP in PowerShell and CLI soon.|
-|FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.|
-|Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.|
-|Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering masks threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.|
-|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|A fix is being investigated.|
-|With secured virtual hubs, availability zones can only be configured during deployment.| You can't configure Availability Zones after a firewall with secured virtual hubs has been deployed.|This is by design.|
-|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall.
-|SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules.
-|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by Azure platform. This is the default platform behavior in Azure, Azure Firewall doesn't introduce any more specific restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also supports other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25.
-|SNAT port exhaustion|Azure Firewall currently supports 2496 ports per Public IP address per backend Virtual Machine Scale Set instance. By default, there are two Virtual Machine Scale Set instances. So, there are 4992 ports per flow (destination IP, destination port and protocol (TCP or UDP). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for VNET deployments. <br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).|
-|DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established.
-|Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.|
-|Inbound Passive FTP may not work depending on your FTP server configuration |Passive FTP establishes different connections for control and data channels. Inbound connections on Azure Firewall are SNATed to one of the firewall private IP addresses to ensure symmetric routing. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|Preserving the original source IP address is being investigated. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses.|
-|Active FTP won't work when the FTP client must reach an FTP server across the internet.|Active FTP utilizes a PORT command from the FTP client that directs the FTP server what IP and port to use for the data channel. This PORT command utilizes the private IP of the client that can't be changed. Client-side traffic traversing the Azure Firewall will be NAT for Internet-based communications, making the PORT command seen as invalid by the FTP server.|This is a general limitation of Active FTP when used with client-side NAT.|
-|NetworkRuleHit metric is missing a protocol dimension|The ApplicationRuleHit metric allows filtering based on protocol, but this capability is missing in the corresponding NetworkRuleHit metric.|A fix is being investigated.|
-|NAT rules with ports between 64000 and 65535 are unsupported|Azure Firewall allows any port in the 1-65535 range in network and application rules, however NAT rules only support ports in the 1-63999 range.|This is a current limitation.
-|Configuration updates may take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.|
-|Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indication (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you may be able to control the connection using a network rule instead of an application rule. See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.|
-|Can't add firewall policy tags using the portal or Azure Resource Manager (ARM) templates|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal or ARM templates. The following error is generated: *Couldn't save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.|
-|IPv6 not currently supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
-|Updating multiple IP Groups fails with conflict error.|When you update two or more IP Groups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IP Group, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IP Group is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IP Groups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
-|Removing RuleCollectionGroups using ARM templates not supported.|Removing a RuleCollectionGroup using ARM templates isn't supported and results in failure.|This isn't a supported operation.|
-|DNAT rule for allow *any* (*) will SNAT traffic.|If a DNAT rule allows *any* (*) as the Source IP address, then an implicit Network rule matches VNet-VNet traffic and will always SNAT the traffic.|This is a current limitation.|
-|Adding a DNAT rule to a secured virtual hub with a security provider isn't supported.|This results in an asymmetric route for the returning DNAT traffic, which goes to the security provider.|Not supported.|
-| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. |
-|XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
-|Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
-|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone-redundant Public IP address, then assign this previously created IP address during the Firewall deployment.|
-|Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone won't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desired state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.|
-
-### Azure Firewall Premium
-
-Azure Firewall Premium has the following known issues:
--
-|Issue |Description |Mitigation |
-||||
-|ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in HTTPS handshake.|Today only Firefox supports ESNI through custom configuration. Suggested workaround is to disable this feature.|
-|Client Certification Authentication isn't supported|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
-|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over ports 80 (PLAIN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.|
-|Untrusted customer signed certificates|Customer signed certificates aren't trusted by the firewall once received from an intranet-based web server.|A fix is being investigated.|
-|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.|
-|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.|
-|TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
-|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
-|TLSi intermediate CA certificate expiration|In some unique cases, the intermediate CA certificate can expire two months before the original expiration date.|Renew the intermediate CA certificate two months before the original expiration date. A fix is being investigated.|
+For Azure Firewall known issues, see [Azure Firewall known issues](firewall-known-issues.md).
## Next steps
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
The IoT Central REST API lets you develop client applications that integrate wit
Every IoT Central REST API call requires an authorization header that IoT Central uses to determine the identity of the caller and the permissions that caller is granted within the application.
-This article describes the types of token you can use in the authorization header, and how to get them. Please note that service principals are the recommended method for access management for IoT Central REST APIs.
+This article describes the types of tokens you can use in the authorization header, and how to get them. Service principals are the recommended approach to access management for IoT Central REST APIs.
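+
+For example, you can get a Microsoft Entra bearer token for IoT Central with the Azure CLI. The following is a minimal sketch; `https://apps.azureiotcentral.com` is the IoT Central token audience:
+
+```azurecli
+# Request a bearer token for the IoT Central REST API. Pass the returned
+# value in the Authorization header as "Bearer <token>".
+az account get-access-token --resource https://apps.azureiotcentral.com --query accessToken --output tsv
+```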
## Token types
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
The PowerShell module performs the following functions:
### Unsupported Scenarios

- Basic Load Balancers with a Virtual Machine Scale Set backend pool member that is also a member of a backend pool on a different load balancer
-- Basic Load Balancers with backend pool members that aren't a Virtual Machine Scale Set
-- Basic Load Balancers with only empty backend pools
- Basic Load Balancers with IPV6 frontend IP configurations
-- Basic Load Balancers with a Virtual Machine Scale Set backend pool member configured with 'Flexible' orchestration mode
- Basic Load Balancers with a Virtual Machine Scale Set backend pool member where one or more Virtual Machine Scale Set instances have ProtectFromScaleSetActions Instance Protection policies enabled
- Migrating a Basic Load Balancer to an existing Standard Load Balancer
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md
You can view run history only for stateful workflows, not stateless workflows. T
1. On the **Run History** tab, select the run that you want to review.
- The run details view opens and shows the status for each step in the run.
+ The run details page opens and shows the status for each step in the run.
> [!TIP] >
You can view run history only for stateful workflows, not stateless workflows. T
## Rerun a workflow with same inputs
-You can rerun a previously finished workflow run using the same inputs that the run previously used by resubmitting the run to Azure Logic Apps.
+You can rerun a previously finished workflow with the same inputs that the workflow previously used by resubmitting the run to Azure Logic Apps. Completing this task creates and adds a new workflow run to your workflow's run history.
> [!NOTE] > > If your workflow has operations such as create or delete operations, resubmitting a run might
-> create duplicate data or try to delete data that no longer exists, resulting in an error.
+> create duplicate data or try to delete data that no longer exists, resulting in an error.
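+
+If you prefer to script resubmission instead of using the portal, the same operation is exposed through the Azure REST API for Consumption workflows. The following `az rest` call is a hedged sketch; all resource names and the run ID are placeholders:
+
+```azurecli
+# Resubmit a previous run by posting to the trigger history's resubmit action.
+# <run-id> is the run identifier shown in the workflow's run history.
+az rest --method post \
+  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<workflow-name>/triggers/<trigger-name>/histories/<run-id>/resubmit?api-version=2016-06-01"
+```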
### [Consumption](#tab/consumption)
You can rerun a previously finished workflow run using the same inputs that the
> If the resubmitted run doesn't appear, on the **Runs history** pane toolbar, select **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
-1. To review the resubmitted workflow run, on the **Runs history** tab, select that run.
+1. To review the inputs and outputs for the resubmitted workflow run, on the **Runs history** tab, select that run.
### [Standard](#tab/standard) You can rerun only stateful workflows, not stateless workflows. To enable run history for a stateless workflow, see [Enable run history for stateless workflows](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
+#### Rerun the entire workflow
+ 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. 1. On the workflow menu, select **Overview**. On the **Overview** page, select **Run History**, which shows the run history for the current workflow.
You can rerun only stateful workflows, not stateless workflows. To enable run hi
> If the resubmitted run doesn't appear, on the **Overview** page toolbar, select **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+1. To review the inputs and outputs from the resubmitted workflow run, on the **Run History** tab, select that run.
+
+### Rerun from a specific action (preview)
+
+You can rerun a previously finished workflow starting at a specific action using the same inputs and outputs from the preceding actions. The resubmitted action and all subsequent actions run as usual. When the resubmitted actions finish, a new workflow run appears in your workflow's run history.
+
+> [!NOTE]
+>
+> This capability is in preview. For legal terms that apply to Azure features that
+> are in beta, preview, or otherwise not yet released into general availability, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this capability might change before general availability (GA).
+
+The resubmit capability is available for all actions except in non-sequential and complex concurrency scenarios, and is subject to the following limitations:
+
+| Actions | Resubmit availability and limitations |
+|||
+| **Condition** action and actions in the **True** and **False** paths | - Yes for **Condition** action <br>- No for actions in the **True** and **False** paths |
+| **For each** action and all actions inside the loop | No for all actions |
+| **Switch** action and all actions in the **Default** path and **Case** paths | - Yes for **Switch** action <br>- No for actions in the **Default** path and **Case** paths |
+| **Until** action and all actions inside the loop | No for all actions |
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow.
+
+1. On the workflow menu, select **Overview**. On the **Overview** page, select **Run History**, which shows the run history for the current workflow.
+
+1. On the **Run History** tab, select the run that you want to resubmit.
+
+ The run details page opens and shows the status for each step in the run.
+
+1. In the run details page, find the action from where you want to resubmit the workflow run, open the shortcut menu, and select **Submit from this action**.
+
+ The run details page refreshes and shows the new run. All the operations that precede the resubmitted action show a lighter-colored status icon, representing reused inputs and outputs. The resubmitted action and subsequent actions show status icons in their usual colors. For more information, see [Review workflow run history](#review-runs-history).
+
+ > [!TIP]
+ >
+ > If the run hasn't fully finished, on the run details page toolbar, select **Refresh**.
+ <a name="add-azure-alerts"></a>
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
Serverless compute can be used to run command, sweep, AutoML, pipeline, distribu
* To further simplify job submission, you can skip the resources altogether. Azure Machine Learning defaults the instance count and chooses an instance type (VM size) based on factors like quota, cost, performance, and disk size.
* Shorter wait times before jobs start executing in some cases.
* User identity and workspace user-assigned managed identity are supported for job submission.
-* With managed network isolation, you can streamline and automate your network isolation configuration.
+* With managed network isolation, you can streamline and automate your network isolation configuration. **Customer virtual network** support is coming soon.
* Admin control through quota and Azure policies ## How to use serverless compute
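+
+As a minimal sketch, running a job on serverless compute just means omitting the compute target when you submit the job; the resource group, workspace, and `job.yml` names here are placeholders:
+
+```azurecli
+# Submit a command job whose YAML omits the `compute` field. Azure Machine
+# Learning then runs it on serverless compute, defaulting the instance type and count.
+az ml job create --file job.yml \
+  --resource-group <resource-group> \
+  --workspace-name <workspace-name>
+```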
machine-learning How To End To End Llmops With Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md
Before you can set up a Prompt flow project with Azure Machine Learning, you nee
:::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png" alt-text="Screenshot of the GitHub menu bar on a GitHub project with settings selected. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png":::
-1. Then select **Secrets**, then **Actions**:
+1. Then select **Secrets and variables**, then **Actions**:
:::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png" alt-text="Screenshot of on GitHub showing the security settings with security and actions highlighted." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png":::
notification-hubs Notification Hubs High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md
Previously updated : 09/11/2023 Last updated : 10/02/2023
Android, Windows, etc.) from any back-end (cloud or on-premises). This article d
> > - Availability zones >
-> Availability zones support will incur an additional cost on top of existing tier pricing. Starting October 9th 2023, you are automatically billed.
+> Availability zones support incurs an additional cost on top of existing tier pricing. Starting October 27, 2023, you will be automatically billed.
Notification Hubs offers two availability configurations:
openshift Howto Infrastructure Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-infrastructure-nodes.md
In order for Azure VMs added to an ARO cluster to be recognized as infrastructur
- Standard_E4s_v5 - Standard_E8s_v5 - Standard_E16s_v5
- - Standard_E4as_v5
- - Standard_E8as_v5
- - Standard_E16as_v5
- There can be no more than three nodes. Any additional nodes are charged an OpenShift fee.
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Previously updated : 08/21/2023 Last updated : 10/02/2023 #
Last updated 08/21/2023
This how-to guide explains the steps for installing the Azure CLI and the extensions required to interact with Operator Nexus. Installations of the following CLI extensions are required:
-`networkcloud` (for Microsoft.NetworkCloud APIs), `managednetworkfabric` (for Microsoft.ManagedNetworkFabric APIs) and `hybridaks` (for AKS-Hybrid APIs).
+`networkcloud` (for Microsoft.NetworkCloud APIs) and `managednetworkfabric` (for Microsoft.ManagedNetworkFabric APIs).
If you haven't already installed Azure CLI: [Install Azure CLI][installation-instruction]. The aka.ms links download the latest available version of the extension.
+For a list of available versions, see [the extension release history][az-cli-networkcloud-cli-versions].
## Install `networkcloud` CLI extension -- Remove any previously installed version of the extension
+- Upgrade any previously installed version of the extension
```azurecli
- az extension remove --name networkcloud
+ az extension add --yes --upgrade --name networkcloud
``` - - Install and test the latest version of `networkcloud` CLI extension ```azurecli
If you haven't already installed Azure CLI: [Install Azure CLI][installation-ins
az networkcloud --help ```
-For list of available versions, see [the extension release history][az-cli-networkcloud-cli-versions].
-
-To install a specific version of the networkcloud CLI extension, add `--version` parameter to the command. For example, below installs 0.4.1
-
-```azurecli
-az extension add --name networkcloud --version 0.4.1
-```
- ## Install `managednetworkfabric` CLI extension -- Remove any previously installed version of the extension
+- Upgrade any previously installed version of the extension
```azurecli
- az extension remove --name managednetworkfabric
+ az extension add --yes --upgrade --name managednetworkfabric
``` - Install and test the `managednetworkfabric` CLI extension
az extension add --name networkcloud --version 0.4.1
az extension add --yes --upgrade --name k8s-extension az extension add --yes --upgrade --name k8s-configuration az extension add --yes --upgrade --name connectedmachine
- az extension add --yes --upgrade --name monitor-control-service --version 0.2.0
+ az extension add --yes --upgrade --name monitor-control-service
az extension add --yes --upgrade --name ssh az extension add --yes --upgrade --name connectedk8s ```
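To compare your environment against the versions shown below, you can list the installed extensions; the output that follows is an example, and versions vary over time:

```azurecli
az extension list --query "[].{Name:name, Version:version}" --output table
```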
Name Version
monitor-control-service 0.2.0
connectedmachine 0.6.0
-connectedk8s 1.4.0
-k8s-extension 1.4.2
-networkcloud 1.0.0
+connectedk8s 1.4.2
+k8s-extension 1.4.3
+networkcloud 1.1.0
k8s-configuration 1.7.0 managednetworkfabric 3.2.0 customlocation 0.1.3
-ssh 2.0.1
+ssh 2.0.2
``` <!-- LINKS - External -->
private-5g-core Azure Stack Edge Virtual Machine Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md
Previously updated : 01/27/2023 Last updated : 09/29/2023 # Azure Stack Edge virtual machine sizing
-The following table contains information about the VMs that Azure Private 5G Core (AP5GC) uses when running on an Azure Stack Edge (ASE) device. You can use this information, for example, to check how much space you'll have remaining on your ASE device after installing Azure Private 5G Core.
+The following table lists the hardware resources that Azure Private 5G Core (AP5GC) uses when running on supported Azure Stack Edge (ASE) devices.
-| VM detail | Flavor name | vCPUs | Memory (GB) | Disk size (GB) | VM function |
+| VM detail | Flavor name | vCPUs | Memory (GiB) | Disk size (GB) | VM function |
|||||||
-| Management Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 80 | Management Control Plane to create Kubernetes clusters |
-| AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 | Control Plane of the Kubernetes cluster used for AP5GC |
-| AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 </br> Persistent - 102 GB | AP5GC workload node |
-| Control plane upgrade reserve | | 4 | 4 | 0 | Used by ASE during upgrade of the control plane VM |
-| **Total requirements** | | **28** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | |
+| Management Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 80 | Management Control Plane to create Kubernetes clusters |
+| AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 | Control Plane of the Kubernetes cluster used for AP5GC |
+| AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 </br> Persistent - 102 GB | AP5GC workload node |
+| Control plane upgrade reserve | | 0 (see note) | 4 | 0 | Used by ASE during upgrade of the control plane VM |
+| **Total requirements** | | **24** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | |
-## Remaining usable resource on Azure Stack Edge Pro
+> [!NOTE]
+> An additional four vCPUs are used during ASE upgrade. We do not recommend reserving these additional vCPUs because the ASE control plane software can contend with other workloads.
-The following resources are available within ASE after deploying AP5GC. You can use these resources, for example, to deploy additional virtual machines or storage accounts.
+## Remaining usable resources
+
+The following table lists the resources available on supported ASE devices after deploying AP5GC. You can use these resources to deploy additional virtual machines or storage accounts, for example.
| Resource | Pro with GPU | Pro 2 - 64G2T | Pro 2 - 128G4T1GPU | Pro 2 - 256G6T2GPU |
|--|--|--|--|--|
-| vCPUs | 12 | 4 | 4 | 4 |
-| Memory | 56 GB | 3 GB | 51 GB | 163 GB |
-| Storage | ~3.75 TB | ~280 GB | ~1.1 TB | ~2.0 TB |
+| vCPUs | 16 | 8 | 8 | 8 |
+| Memory | 52 GiB | 4 GiB | 52 GiB | 180 GiB |
+| Storage | ~ 3.75 TB | ~ 280 GB | ~ 1.1 TB | ~ 2.0 TB |
+
+For the full device specifications, see:
+
+- [Technical specifications and compliance for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-technical-specifications-compliance)
+- [Technical specifications and compliance for Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance)
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
To use Azure Private 5G Core, you need to register some additional resource prov
1. If your account has multiple subscriptions, make sure you are in the correct one: ```azurecli
- az account set ΓÇô-subscription <subscription_id>
+ az account set --subscription <subscription_id>
``` 1. Check the Azure CLI version:
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
The following capabilities must be present to allow user equipment (UEs) to atta
- There must be a RAN, sending and receiving the cellular signal, to all parts of the enterprise site that contain UEs needing service. - There must be a packet core instance connected to the RAN and to an upstream network. The packet core is responsible for authenticating the UE's SIMs as they connect across the RAN and request service from the network. It applies policy to the resulting data flows to and from the UEs; for example, to set a quality of service. - The RAN, packet core, and upstream network infrastructure must be connected via Ethernet so that they can pass IP traffic to one another.
+- The site hosting the packet core must have a continuous, high-speed connection to the internet (100 Mbps minimum) to allow for service management, telemetry, diagnostics, and upgrades.
## Designing a private mobile network
private-5g-core Ue Usage Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ue-usage-event-hub.md
When configured, AP5GC will send data usage reports per QoS flow level for all P
|**IP Address** |String |The UE's IP address.|
|**Packet Core Control Plane ARM ID** |String |The identifier of the packet core control plane ARM associated with the UE.|
|**Packet Core Data Plane ARM ID** |String |The identifier of the packet core data plane ARM associated with the UE.|
-|**ARP**|String|The address resolution protocol, including the: priority level, preemption capability and preemption vulnerability. See [5G quality of service (QoS) and QoS flows](policy-control.md#5g-quality-of-service-qos-and-qos-flows) for more information. |
+|**ARP**|Object|The Allocation and Retention Policy, including the priority level, preemption capability, and preemption vulnerability. See [5G quality of service (QoS) and QoS flows](policy-control.md#5g-quality-of-service-qos-and-qos-flows) for more information. |
|- **ArpPriorityLevel**|Int (1-15) |See **ARP** above.|
|- **Preemption Capability**|String |See **ARP** above.|
|- **Preemption Vulnerability**|String |See **ARP** above.|
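To make the nested ARP object concrete, here's a hypothetical fragment of a single usage report (the envelope and values are illustrative only; consult the published event schema for the authoritative field names and enumerations):

```json
{
  "IP Address": "10.20.30.40",
  "ARP": {
    "ArpPriorityLevel": 9,
    "Preemption Capability": "MayPreempt",
    "Preemption Vulnerability": "Preemptable"
  }
}
```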
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
| Americas | Europe | Middle East | Africa | Asia Pacific |
|--|--|--|--|--|
| Brazil South | France Central | Qatar Central | South Africa North | Australia East |
-| Canada Central | Italy North* | UAE North | | Central India |
+| Canada Central | Italy North | UAE North | | Central India |
| Central US | Germany West Central | Israel Central* | | Japan East |
| East US | Norway East | | | Korea Central |
| East US 2 | North Europe | | | Southeast Asia |
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
The table below lists Azure regions without a region pair:
| Qatar | Qatar Central |
| Poland | Poland Central |
| Israel | Israel Central (Coming soon)|
-| Italy | Italy North (Coming soon)|
+| Italy | Italy North|
| Austria | Austria East (Coming soon) | | Spain | Spain Central (Coming soon) | ## Next steps
search Hybrid Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-overview.md
This article explains the concepts, benefits, and limitations of hybrid search.
In Azure Cognitive Search, vector indexes containing embeddings can live alongside textual and numerical fields allowing you to issue hybrid full text and vector queries. Hybrid queries can take advantage of existing functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) in a single search request.
-Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 and cosine similarity. To present these results in a single ranked list, a method of merging the ranked result lists is needed.
+Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 and HNSW. A [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) algorithm is used to merge results. The query response provides just one result set, using RRF to determine which matches are included.
## Structure of a hybrid query
-Hybrid search is predicated on having a search index that contains fields of various types, including plain text and numbers, geo coordinates for geospatial search, and vectors for a mathematical representation of a chunk of text or image, audio, and video. You can use almost all query capabilities in Cognitive Search with a vector query, except for client-side interactions such as autocomplete and suggestions.
+Hybrid search is predicated on having a search index that contains fields of various [data types](/rest/api/searchservice/supported-data-types), including plain text and numbers, geo coordinates for geospatial search, and vectors for a mathematical representation of a chunk of text or image, audio, and video. You can use almost all query capabilities in Cognitive Search with a vector query, except for client-side interactions such as autocomplete and suggestions.
A representative hybrid query might be as follows (notice the vector is trimmed for brevity):
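The following is a minimal sketch using the 2023-07-01-Preview request syntax; the vector values are truncated and the `contentVector` field name is a placeholder drawn from typical samples:

```json
{
  "search": "historic hotel walk to restaurants",
  "vectors": [
    {
      "value": [0.01944167, 0.0360485, -0.0277237],
      "fields": "contentVector",
      "k": 10
    }
  ],
  "select": "HotelName, Description",
  "top": 10
}
```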
A response from the above query might look like this:
## Benefits
-Hybrid search combines the strengths of vector search and keyword search. The advantage of vector search is finding information that's similar to your search query, even if there are no keyword matches in the inverted index. The advantage of keyword or full text search is precision, and the ability to apply semantic ranking that improves the quality of the initial results. Some scenarios, such as product codes, highly specialized jargon, dates, etc. can perform better with keyword search because it can identify exact matches.
+Hybrid search combines the strengths of vector search and keyword search. The advantage of vector search is finding information that's similar to your search query, even if there are no keyword matches in the inverted index. The advantage of keyword or full text search is precision, and the ability to apply semantic ranking that improves the quality of the initial results. Some scenarios - such as querying over product codes, highly specialized jargon, dates, and people's names - can perform better with keyword search because it can identify exact matches.
Benchmark testing on real-world and benchmark datasets indicates that hybrid retrieval with semantic ranking offers significant benefits in search relevance.
search Hybrid Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-ranking.md
RRF works by taking the search results from multiple methods, assigning a recipr
Here's a simple explanation of the RRF process:
-1. Obtain ranked search results from multiple queries executing in parallel for full text search and vector search.
+1. Obtain ranked search results from multiple queries executing in parallel.
-1. Assign reciprocal rank scores for result in each of the ranked lists. RRF generates a new **`@search.score`** for each match in each result set. For each document in the search results, we assign a reciprocal rank score based on its position in the list. The score is calculated as `1/(rank + k)`, where `rank` is the position of the document in the list, and `k` is a constant, which was experimentally observed to perform best if it's set to a small value like 60. **Note that this `k` value is a constant in the RRF algorithm and entirely separate from the `k` that controls the number of nearest neighbors.**
+1. Assign reciprocal rank scores for each result in each of the ranked lists. RRF generates a new **`@search.score`** for each match in each result set. For each document in the search results, the engine assigns a reciprocal rank score based on its position in the list. The score is calculated as `1/(rank + k)`, where `rank` is the position of the document in the list, and `k` is a constant, which was experimentally observed to perform best if it's set to a small value like 60. **Note that this `k` value is a constant in the RRF algorithm and entirely separate from the `k` that controls the number of nearest neighbors.**
1. Combine scores. For each document, the engine sums the reciprocal rank scores obtained from each search system, producing a combined score for each document. 
-1. Rank documents based on combined scores and sort them. The resulting list is the fused ranking.
+1. The engine ranks documents based on combined scores and sorts them. The resulting list is the fused ranking.
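+
+For example, with `k` set to 60, a document that ranks first in the full-text results and third in the vector results receives `1/(1+60) + 1/(3+60)`, which is approximately 0.0164 + 0.0159 = 0.0323, as its fused **`@search.score`**.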
Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
The following chart identifies the scoring property returned on each match, algo
|--|--|--|--|
| full-text search | `@search.score` | BM25 algorithm | No upper limit. |
| vector search | `@search.score` | HNSW algorithm, using the similarity metric specified in the HNSW configuration. | 0.333 - 1.00 (Cosine), 0 to 1 for Euclidean and DotProduct. |
-| hybrid search | `@search.score` | RRF algorithm | Upper limit is only bounded by the number of queries being fused, with each query contributing a maximum of approximately 1 to the RRF score. |
+| hybrid search | `@search.score` | RRF algorithm | Upper limit is bounded by the number of queries being fused, with each query contributing a maximum of approximately 1 to the RRF score. For example, merging three queries would produce higher RRF scores than if only two search results are merged. |
| semantic ranking | `@search.rerankerScore` | Semantic ranking | 1.00 - 4.00 | Semantic ranking doesn't participate in RRF. Its score (`@search.rerankerScore`) is always reported separately in the query response. Semantic ranking can rerank full text and hybrid search results, assuming those results include fields having semantically rich content.
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
A high-level summary of the pattern looks like this:
+ Send the top ranked search results to the LLM. + Use the natural language understanding and reasoning capabilities of the LLM to generate a response to the initial prompt.
-Cognitive Search provides data to the LLM but doesn't train the model. In RAG architecture, there's no extra training. The LLM is pretrained using public data, but it generates responses that are augmented by information from the retriever.
+Cognitive Search provides inputs to the LLM prompt, but doesn't train the model. In RAG architecture, there's no extra training. The LLM is pretrained using public data, but it generates responses that are augmented by information from the retriever.
RAG patterns that include Cognitive Search have the elements indicated in the following illustration.
The web app provides the user experience, providing the presentation, context, a
The app server or orchestrator is the integration code that coordinates the handoffs between information retrieval and the LLM. One option is to use [LangChain](https://python.langchain.com/docs/get_started/introduction) to coordinate the workflow. LangChain [integrates with Azure Cognitive Search](https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search), making it easier to include Cognitive Search as a [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/) in your workflow.
-The information retrieval system provides the searchable index, query logic, and the payload (query response). The query is executed using the existing search engine in Cognitive Search, which can handle keyword (or term) and vector queries. The index is created in advance, based on a schema you define, and loaded with your content that's sourced from files, databases, or storage.
+The information retrieval system provides the searchable index, query logic, and the payload (query response). The search index can contain vectors or non-vector content. Although most samples and demos include vector fields, it's not a requirement. The query is executed using the existing search engine in Cognitive Search, which can handle keyword (or term) and vector queries. The index is created in advance, based on a schema you define, and loaded with your content that's sourced from files, databases, or storage.
The LLM receives the original prompt, plus the results from Cognitive Search. The LLM analyzes the results and formulates a response. If the LLM is ChatGPT, the user interaction might be a back and forth conversation. If you're using Davinci, the prompt might be a fully composed answer. An Azure solution most likely uses Azure OpenAI, but there's no hard dependency on this specific service.
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
If you operate Microsoft Sentinel in a cross-subscription or cross-tenant scenar
## Next steps
-When using analytics rules to detect threats from Microsoft Sentinel, make sure that you enable all rules associated with your connected data sources in order to ensure full security coverage for your environment. The most efficient way to enable analytics rules is directly from the data connector page, which lists any related rules. For more information, see [Connect data sources](connect-data-sources.md).
+When using analytics rules to detect threats from Microsoft Sentinel, make sure you enable all rules associated with your connected data sources to ensure full security coverage for your environment.
-You can also push rules to Microsoft Sentinel via [API](/rest/api/securityinsights/) and [PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0), although doing so requires additional effort. When using API or PowerShell, you must first export the rules to JSON before enabling the rules. API or PowerShell may be helpful when enabling rules in multiple instances of Microsoft Sentinel with identical settings in each instance.
+To automate rule enablement, push rules to Microsoft Sentinel via [API](/rest/api/securityinsights/) and [PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0), although doing so requires additional effort. When using API or PowerShell, you must first export the rules to JSON before enabling the rules. API or PowerShell may be helpful when enabling rules in multiple instances of Microsoft Sentinel with identical settings in each instance.
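+
+As a hedged sketch of that flow using the REST API through `az rest` (workspace names and the rule GUID are placeholders, and you may need to strip read-only properties such as `id` and `etag` from the exported JSON before re-importing it):
+
+```azurecli
+# Export an analytics rule as JSON from the first workspace.
+az rest --method get \
+  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-1>/providers/Microsoft.SecurityInsights/alertRules/<rule-id>?api-version=2022-11-01" > rule.json
+
+# Recreate the rule, with identical settings, in a second workspace.
+az rest --method put \
+  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-2>/providers/Microsoft.SecurityInsights/alertRules/<rule-id>?api-version=2022-11-01" \
+  --body @rule.json
+```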
For more information, see:
service-fabric Service Fabric Reverseproxy Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverseproxy-setup.md
After you have a Resource Manager template, you can enable the reverse proxy wit
```json {
- "apiVersion": "2016-09-01",
+ "apiVersion": "2021-06-01",
"type": "Microsoft.ServiceFabric/clusters", "name": "[parameters('clusterName')]", "location": "[parameters('clusterLocation')]",
After you have a Resource Manager template, you can enable the reverse proxy wit
```json {
- "apiVersion": "2016-09-01",
+ "apiVersion": "2021-06-01",
"type": "Microsoft.ServiceFabric/clusters", "name": "[parameters('clusterName')]", "location": "[parameters('clusterLocation')]",
service-health Impacted Resources Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-planned-maintenance.md
+
+ Title: Resource impact from Azure planned maintenance events
+description: This article details where to find information from Azure Service Health about how Azure planned maintenance impacts your resources.
+ Last updated : 9/29/2023++
+# Resource impact from Azure planned maintenance
+
+To support the experience of viewing impacted resources, Service Health has enabled a new feature to:
+
+- Display resources that are impacted by a planned maintenance event.
+- Provide impacted resources information for planned maintenance via the Service Health Portal.
+
+This article details what is communicated to users and where they can view information about their impacted resources.
+
+>[!Note]
+>This feature will be rolled out in phases. Initially, impacted resources are shown only for **SQL resources with advance customer notifications** and **rebootful updates for compute resources**. Planned maintenance impacted resources coverage will be expanded to other resource types and scenarios in the future.
+
+## Viewing impacted resources for planned maintenance events in the Service Health portal
+
+In the Azure portal, the **Impacted Resources** tab under **Service Health** > **Planned Maintenance** displays resources that are affected by a planned maintenance event. The following example of the Impacted Resources tab shows a planned maintenance event with impacted resources.
++
+Service Health provides the following information about resources impacted by a planned maintenance event:
+
+|Fields |Description |
+|||
+|**Resource Name**|Name of the resource impacted by the planned maintenance event|
+|**Resource Type**|Type of resource impacted by the planned maintenance event|
+|**Resource Group**|Resource group which contains the impacted resource|
+|**Region**|Region which contains the impacted resource|
+|**Subscription ID**|Unique ID for the subscription that contains the impacted resource|
+|**Action(*)**|Link to the apply update page during Self-Service window (only for rebootful updates on compute resources)|
+|**Self-serve Maintenance Due Date(*)**|Due date for Self-Service window during which the update can be applied by the user (only for rebootful updates on compute resources)|
+
+>[!Note]
+>Fields with an asterisk * are optional fields that are available depending on the resource type.
+++
+## Filters
+
+Customers can filter the results using the following filters:
+- Region
+- Subscription ID: All Subscription IDs the user has access to
+- Resource Type: All resource types under the user's subscriptions
++
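+To retrieve the same events programmatically, you can call the Resource Health events API. The following `az rest` call is a hedged sketch; the API version and filter syntax are assumptions based on the events list operation:
+
+```azurecli
+# List current service health events for a subscription, filtered to
+# planned maintenance events.
+az rest --method get \
+  --url "/subscriptions/<subscription-id>/providers/Microsoft.ResourceHealth/events?api-version=2022-10-01&\$filter=eventType%20eq%20'PlannedMaintenance'"
+```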
+## Export to CSV
+
+The list of impacted resources can be exported to a CSV file by selecting this option.
++
+## Next steps
+- [Introduction to the Azure Service Health dashboard](service-health-overview.md)
+- [Introduction to Azure Resource Health](resource-health-overview.md)
+- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md
zone_pivot_groups: spring-apps-plan-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+ This article explains how to deploy a Spring Boot event-driven application to Azure Spring Apps. The sample project is an event-driven application that subscribes to a [Service Bus queue](../service-bus-messaging/service-bus-queues-topics-subscriptions.md#queues) named `lower-case`, and then handles the message and sends another message to another queue named `upper-case`. To make the app simple, message processing just converts the message to uppercase. The following diagram depicts this process:
spring-apps Quickstart Deploy Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-microservice-apps.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise
+**This article applies to:** ✔️ Basic/Standard
This article explains how to deploy microservice applications to Azure Spring Apps using the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices). The Pet Clinic sample demonstrates the microservice architecture pattern. The following diagram shows the architecture of the PetClinic application on Azure Spring Apps.
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
+
+ Title: Quickstart - Deploy RESTful API application to Azure Spring Apps
+description: Learn how to deploy a RESTful API application to Azure Spring Apps.
+++ Last updated : 10/02/2023++++
+# Quickstart: Deploy RESTful API application to Azure Spring Apps
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview)
+
+This article describes how to deploy a RESTful API application protected by [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to Azure Spring Apps. The sample project is a simplified version based on the [Simple Todo](https://github.com/Azure-Samples/ASA-Samples-Web-Application) web application, which only provides the backend service and uses Microsoft Entra ID to protect the RESTful APIs.
+
+These RESTful APIs are protected by applying role-based access control (RBAC). Anonymous users can't access any data and aren't allowed to control access for different users. Authenticated users have only the following three permissions:
+
+- Read: With this permission, a user can read the ToDo data.
+- Write: With this permission, a user can add or update the ToDo data.
+- Delete: With this permission, a user can delete the ToDo data.
+
+After the deployment is successful, you can view and test the APIs through the Swagger UI.
++
+The following diagram shows the architecture of the system:
+++
+## 1. Prerequisites
+
+### [Azure portal](#tab/Azure-portal)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- A Microsoft Entra ID tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- [curl](https://curl.se/download.html).
+
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- A Microsoft Entra ID tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
+- [curl](https://curl.se/download.html).
++++
+## 5. Validate the app
+
+Now, you can access the RESTful API to see if it works.
+
+### Request an access token
+
+The RESTful APIs act as a resource server, which is protected by Microsoft Entra ID. Before acquiring an access token, you're required to register another application in Microsoft Entra ID and grant permissions to the client application, which is named `ToDoWeb`.
+
+#### Register the client application
+
+Use the following steps to register an application in Microsoft Entra ID, which is used to add the permissions for the `ToDo` app:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. If you have access to multiple tenants, use the **Directory + subscription** filter (:::image type="icon" source="media/quickstart-deploy-restful-api-app/portal-directory-subscription-filter.png" border="false":::) to select the tenant in which you want to register an application.
+
+1. Search for and select **Microsoft Entra ID**.
+
+1. Under **Manage**, select **App registrations** > **New registration**.
+
+1. Enter a name for your application in the **Name** field - for example, *ToDoWeb*. Users of your app might see this name, and you can change it later.
+
+1. For **Supported account types**, use the default value **Accounts in this organizational directory only**.
+
+1. Select **Register** to create the application.
+
+1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You need it to acquire an access token.
+
+1. Select **API permissions** > **Add a permission** > **My APIs**. Select the `ToDo` application that you registered earlier, and then select the **ToDo.Read**, **ToDo.Write**, and **ToDo.Delete** permissions. Select **Add permissions**.
+
+1. Select **Grant admin consent for \<your-tenant-name>** to grant admin consent for the permissions you added.
+
+ :::image type="content" source="media/quickstart-deploy-restful-api-app/api-permissions.png" alt-text="Screenshot of the Azure portal that shows the API permissions of a web application." lightbox="media/quickstart-deploy-restful-api-app/api-permissions.png":::
+
+#### Add user to access the RESTful APIs
+
+Use the following steps to create a member user in your Microsoft Entra ID tenant. Then, the user can manage the data of the `ToDo` application through RESTful APIs.
+
+1. Under **Manage**, select **Users** > **New user** > **Create new user**.
+
+1. On the **Create new user** page, enter the following information:
+
+ - **User principal name**: Enter a name for the user.
+ - **Display name**: Enter a display name for the user.
+ - **Password**: Copy the autogenerated password provided in the **Password** box.
+
+ > [!NOTE]
+ > New users must complete the first sign-in authentication and update their passwords, otherwise, you receive an `AADSTS50055: The password is expired` error when you get the access token.
+ >
+ > When a new user logs in, they receive an **Action Required** prompt. They can choose **Ask later** to skip the validation.
+
+1. Select **Review + create** to review your selections. Select **Create** to create the user.
+
+#### Update the OAuth2 configuration for Swagger UI authorization
+
+Use the following steps to update the OAuth2 configuration for Swagger UI authorization. Then, you can authorize users to acquire access tokens through the `ToDoWeb` app.
+
+1. Open the Azure Spring Apps instance in the Azure portal.
+
+1. Open your **Microsoft Entra ID** tenant in the Azure portal, and go to the registered `ToDoWeb` app.
+
+1. Under **Manage**, select **Authentication**, select **Add a platform**, and then select **Single-page application**.
+
+1. Use the format `<your-app-exposed-application-url-or-endpoint>/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL in the **Redirect URIs** field - for example, `https://simple-todo-api.xxxxxxxx-xxxxxxxx.xxxxxx.azurecontainerapps.io/swagger-ui/oauth2-redirect.html` - and then select **Configure**.
+
+ :::image type="content" source="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png" alt-text="Screenshot of the Azure portal that shows the Authentication page for Microsoft Entra ID." lightbox="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png":::
+
+#### Obtain the access token
+
+Use the following steps to obtain an access token from Microsoft Entra ID with the [OAuth 2.0 authorization code flow](../active-directory/develop/v2-oauth2-auth-code-flow.md) method, and then access the RESTful APIs of the `ToDo` app:
+
+1. Open the URL exposed by the app, then select **Authorize** to prepare the OAuth2 authentication.
+
+1. In the **Available authorizations** window, enter the client ID of the `ToDoWeb` app in the **client_id** field, select all the scopes for **Scopes** field, ignore the **client_secret** field, and then select **Authorize** to redirect to the Microsoft Entra ID sign-in page.
+
+After you complete the sign-in with the user you created earlier, you're returned to the **Available authorizations** window.
+
+### Access the RESTful APIs
+
+Use the following steps to access the RESTful APIs of the `ToDo` app in the Swagger UI:
+
+1. Select the API **POST /api/simple-todo/lists** and then select **Try it out**. Enter the following request body, and then select **Execute** to create a ToDo list.
+
+ ```json
+ {
+ "name": "My List"
+ }
+ ```
+
+ After the execution is complete, you see the following **Response body**:
+
+ ```json
+ {
+ "id": "<ID-of-the-ToDo-list>",
+ "name": "My List",
+ "description": null
+ }
+ ```
+
+1. Select the API **POST /api/simple-todo/lists/{listId}/items** and then select **Try it out**. For **listId**, enter the ToDo list ID you created previously, enter the following request body, and then select **Execute** to create a ToDo item.
+
+ ```json
+ {
+ "name": "My first ToDo item",
+ "listId": "<ID-of-the-ToDo-list>",
+ "state": "todo"
+ }
+ ```
+
+ This action returns the following ToDo item:
+
+ ```json
+ {
+ "id": "<ID-of-the-ToDo-item>",
+ "listId": "<ID-of-the-ToDo-list>",
+ "name": "My first ToDo item",
+ "description": null,
+ "state": "todo",
+ "dueDate": "2023-07-11T13:59:24.9033069+08:00",
+ "completedDate": null
+ }
+ ```
+
+1. Select the API **GET /api/simple-todo/lists** and then select **Execute** to query ToDo lists. This action returns the following ToDo lists:
+
+ ```json
+ [
+ {
+ "id": "<ID-of-the-ToDo-list>",
+ "name": "My List",
+ "description": null
+ }
+ ]
+ ```
+
+1. Select the API **GET /api/simple-todo/lists/{listId}/items** and then select **Try it out**. For **listId**, enter the ToDo list ID you created previously, and then select **Execute** to query the ToDo items. This action returns the following ToDo item:
+
+ ```json
+ [
+ {
+ "id": "<ID-of-the-ToDo-item>",
+ "listId": "<ID-of-the-ToDo-list>",
+ "name": "My first ToDo item",
+ "description": null,
+ "state": "todo",
+ "dueDate": "2023-07-11T13:59:24.903307+08:00",
+ "completedDate": null
+ }
+ ]
+ ```
+
+1. Select the API **PUT /api/simple-todo/lists/{listId}/items/{itemId}** and then select **Try it out**. For **listId**, enter the ToDo list ID. For **itemId**, enter the ToDo item ID, enter the following request body, and then select **Execute** to update the ToDo item.
+
+ ```json
+ {
+ "id": "<ID-of-the-ToDo-item>",
+ "listId": "<ID-of-the-ToDo-list>",
+ "name": "My first ToDo item",
+ "description": "Updated description.",
+ "dueDate": "2023-07-11T13:59:24.903307+08:00",
+ "state": "inprogress"
+ }
+ ```
+
+ This action returns the following updated ToDo item:
+
+ ```json
+ {
+ "id": "<ID-of-the-ToDo-item>",
+ "listId": "<ID-of-the-ToDo-list>",
+ "name": "My first ToDo item",
+ "description": "Updated description.",
+ "state": "inprogress",
+ "dueDate": "2023-07-11T05:59:24.903307Z",
+ "completedDate": null
+ }
+ ```
+
+1. Select the API **DELETE /api/simple-todo/lists/{listId}/items/{itemId}** and then select **Try it out**. For **listId**, enter the ToDo list ID. For **itemId**, enter the ToDo item ID, and then select **Execute** to delete the ToDo item. You should see that the server response code is `204`.
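+
+For reference, the same delete call can be made outside the Swagger UI; the following is a minimal sketch with hypothetical placeholder values for the app URL, IDs, and token:
+
+```bash
+# Expect an HTTP 204 No Content status in the response.
+curl -i -X DELETE "https://<app-url>/api/simple-todo/lists/<list-id>/items/<item-id>" \
+     -H "Authorization: Bearer <access-token>"
+```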
+## 7. Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Deploy an event-driven application to Azure Spring Apps](./quickstart-deploy-event-driven-app-standard-consumption.md)
+
+> [!div class="nextstepaction"]
+> [Quickstart: Deploy microservice applications to Azure Spring Apps](./quickstart-deploy-microservice-apps.md)
+
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
+
+> [!div class="nextstepaction"]
+> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
+
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
zone_pivot_groups: spring-apps-plan-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+ This quickstart shows how to deploy a Spring Boot web application to Azure Spring Apps. The sample project is a simple ToDo application to add tasks, mark when they're complete, and then delete them. The following screenshot shows the application: :::image type="content" source="./media/quickstart-deploy-web-app/todo-app.png" alt-text="Screenshot of a sample web application in Azure Spring Apps." lightbox="./media/quickstart-deploy-web-app/todo-app.png":::
The following diagram shows the architecture of the system:
::: zone pivot="sc-consumption-plan,sc-standard"
-This article provides the following options for deploying to Azure Spring Apps:
+This article describes the following options for creating resources and deploying applications to Azure Spring Apps:
-- Azure portal - This is a more conventional way to create resources and deploy applications step by step. This approach is suitable for Spring developers who are using Azure cloud services for the first time.-- Azure Developer CLI: This is a more efficient way to automatically create resources and deploy applications through simple commands, and it covers application code and infrastructure as code files needed to provision the Azure resources. This approach is suitable for Spring developers who are familiar with Azure cloud services.
+- Azure portal: Use the Azure portal to create resources and deploy applications step by step. The Azure portal is suitable for developers who are using Azure cloud services for the first time.
+- Azure Developer CLI: Use the Azure Developer CLI to create resources and deploy applications through simple commands that cover both the application code and the infrastructure-as-code files needed to provision the Azure resources. The Azure Developer CLI is suitable for developers who are familiar with Azure cloud services.
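+
+For example, with a compatible Azure Developer CLI template, provisioning and deployment reduce to a couple of commands. The following is a minimal sketch, assuming an `azd` template exists for this sample:
+
+```bash
+azd init   # initialize the project from a template
+azd up     # provision the Azure resources and deploy the application
+```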
::: zone-end
Now you can access the deployed app to see whether it works. Use the following s
::: zone pivot="sc-enterprise"
-1. After the deployment has completed, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
+1. After the deployment is complete, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear the same as it did when you ran the app locally.
-1. Use the following command to check the app's log to investigate any deployment issue:
+1. To check the app's log to investigate any deployment issue, use the following command:
```azurecli az spring app logs \
Now you can access the deployed app to see whether it works. Use the following s
1. Access the application with the output application URL. The page should appear as you saw in localhost.
-1. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
+1. From the navigation menu of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
:::image type="content" source="media/quickstart-deploy-web-app/logs.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps logs page." lightbox="media/quickstart-deploy-web-app/logs.png":::
Now you can access the deployed app to see whether it works. Use the following s
::: zone pivot="sc-enterprise"
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. To delete the resource group, use the following command:
```azurecli az group delete --name ${RESOURCE_GROUP}
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
inputs:
``` + ## Skip building the API If you want to skip building the API, you can bypass the automatic build and deploy the API built in a previous step.
inputs:
+## Run workflow without deployment secrets
+
+Sometimes you need your workflow to continue processing even when some secrets are missing. Set the `SKIP_DEPLOY_ON_MISSING_SECRETS` environment variable to `true` to configure your workflow to proceed without defined secrets.
+
+When enabled, this feature allows the workflow to continue without deploying the site's content.
+
+# [GitHub Actions](#tab/github-actions)
+
+```yaml
+...
+
+with:
+ azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+ repo_token: ${{ secrets.GITHUB_TOKEN }}
+ action: 'upload'
+ app_location: 'src'
+ api_location: 'api'
+ output_location: 'public'
+env:
+ SKIP_DEPLOY_ON_MISSING_SECRETS: true
+```
+
+# [Azure Pipelines](#tab/azure-devops)
+
+```yaml
+...
+
+inputs:
+ app_location: 'src'
+ api_location: 'api'
+ output_location: 'public'
+ azure_static_web_apps_api_token: $(deployment_token)
+env:
+ SKIP_DEPLOY_ON_MISSING_SECRETS: true
+```
+++ ## Environment variables You can set environment variables for your build via the `env` section of a job's configuration.
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Data must remain in the archive tier for at least 180 days or be subject to an e
While a blob is in the archive tier, it can't be read or modified. To read or download a blob in the archive tier, you must first rehydrate it to an online tier, either hot, cool, or cold. Data in the archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
-An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the archive tier is read-only, while blob index tags can be read or written. Storage costs for metadata of archived blobs will be charged on Cool tier rates.
+An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the archive tier is read-only, while blob index tags can be read or written. Storage costs for the metadata of archived blobs are charged at cool tier rates.
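+
+For example, you can still retrieve an archived blob's properties and metadata without rehydrating it. The following is a minimal Azure CLI sketch, assuming placeholder account, container, and blob names:
+
+```azurecli
+az storage blob show \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --name <archived-blob> \
+    --auth-mode login
+```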
Snapshots aren't supported for archived blobs. The following operations are supported for blobs in the archive tier:
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Allowing or disallowing anonymous access requires version 2019-04-01 or later of
### Permissions for disallowing anonymous access
-To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
+To set the **AllowBlobAnonymousAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role - The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
- [Prevent anonymous read access to containers and blobs](anonymous-read-access-prevent.md) - [Access public containers and blobs anonymously with .NET](anonymous-read-access-client.md)-- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
+- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
To remediate anonymous access, first determine whether your storage account uses
If your storage account is using the Azure Resource Manager deployment model, then you can remediate anonymous access for an account at any time by setting the account's **AllowBlobPublicAccess** property to **False**. After you set the **AllowBlobPublicAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the anonymous access setting for any individual container.
+If your storage account is using the Azure Resource Manager deployment model, then you can remediate anonymous access for an account at any time by setting the account's **AllowBlobAnonymousAccess** property to **False**. After you set the **AllowBlobAnonymousAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the anonymous access setting for any individual container.
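+
+For example, the property can be set from the Azure CLI. The following is a minimal sketch with placeholder names:
+
+```azurecli
+az storage account update \
+    --name <storage-account> \
+    --resource-group <resource-group> \
+    --allow-blob-public-access false
+```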
+ To learn more about how to remediate anonymous access for Azure Resource Manager accounts, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md). ### Classic accounts
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Title: Archive a blob
-description: Learn how to create a blob in the Archive tier, or move an existing blob to the Archive tier.
+description: Learn how to create a blob in the archive tier, or move an existing blob to the archive tier.
# Archive a blob
-The Archive tier is an offline tier for storing blob data that is rarely accessed. The Archive tier offers the lowest storage costs, but higher data retrieval costs and latency compared to the online tiers (Hot and Cool). Data must remain in the Archive tier for at least 180 days or be subject to an early deletion charge. For more information about the Archive tier, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
+The archive tier is an offline tier for storing blob data that is rarely accessed. The archive tier offers the lowest storage costs, but higher data retrieval costs and latency compared to the online tiers (hot and cool). Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge. For more information about the archive tier, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
-While a blob is in the Archive tier, it can't be read or modified. To read or download a blob in the Archive tier, you must first rehydrate it to an online tier, either Hot or Cool. Data in the Archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+While a blob is in the archive tier, it can't be read or modified. To read or download a blob in the archive tier, you must first rehydrate it to an online tier, either hot or cool. Data in the archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
> [!CAUTION]
-> A blob in the Archive tier is offline. That is, it cannot be read or modified until it is rehydrated. The rehydration process can take several hours and has associated costs. Before you move data to the Archive tier, consider whether taking blob data offline may affect your workflows.
+> A blob in the archive tier is offline. That is, it cannot be read or modified until it is rehydrated. The rehydration process can take several hours and has associated costs. Before you move data to the archive tier, consider whether taking blob data offline may affect your workflows.
You can use the Azure portal, PowerShell, Azure CLI, or one of the Azure Storage client libraries to manage data archiving. ## Archive blobs on upload
-To archive one ore more blob on upload, create the blob directly in the Archive tier.
+To archive one or more blobs on upload, create the blobs directly in the archive tier.
### [Portal](#tab/azure-portal)
To archive a blob or set of blobs on upload from the Azure portal, follow these
1. Expand the **Advanced** section, and set the **Access tier** to *Archive*. 1. Select the **Upload** button.
- :::image type="content" source="media/archive-blob/upload-blobs-archive-portal.png" alt-text="Screenshot showing how to upload blobs to the Archive tier in the Azure portal":::
+ :::image type="content" source="media/archive-blob/upload-blobs-archive-portal.png" alt-text="Screenshot showing how to upload blobs to the archive tier in the Azure portal.":::
### [PowerShell](#tab/azure-powershell)
$ctx = New-AzStorageContext -StorageAccountName $storageAccount -UseConnectedAcc
# Create new container. New-AzStorageContainer -Name $containerName -Context $ctx
-# Upload a single file named blob1.txt to the Archive tier.
+# Upload a single file named blob1.txt to the archive tier.
Set-AzStorageBlobContent -Container $containerName ` -File "blob1.txt" ` -Blob "blob1.txt" ` -Context $ctx ` -StandardBlobTier Archive
-# Upload the contents of a sample-blobs directory to the Archive tier, recursively.
+# Upload the contents of a sample-blobs directory to the archive tier, recursively.
Get-ChildItem -Path "C:\sample-blobs" -File -Recurse | Set-AzStorageBlobContent -Container $containerName ` -Context $ctx `
az storage blob upload-batch \
### [AzCopy](#tab/azcopy)
-To archive a single blob on upload with AzCopy, call the [azcopy copy](../common/storage-ref-azcopy-copy.md) command. Provide a local file as the source and the target blob URI as the destination, and specify the Archive tier as the target tier, as shown in the following example. Remember to replace the placeholder values in brackets with your own values:
+To archive a single blob on upload with AzCopy, call the [azcopy copy](../common/storage-ref-azcopy-copy.md) command. Provide a local file as the source and the target blob URI as the destination, and specify the archive tier as the target tier, as shown in the following example. Remember to replace the placeholder values in brackets with your own values:
> [!NOTE] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
For other examples, see [Upload files to Azure Blob storage by using AzCopy](../
## Archive an existing blob
-You can move an existing blob to the Archive tier in one of two ways:
+You can move an existing blob to the archive tier in one of two ways:
- You can change a blob's tier with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation. **Set Blob Tier** moves a single blob from one tier to another.
- Keep in mind that when you move a blob to the Archive tier with **Set Blob Tier**, then you can't read or modify the blob's data until you rehydrate the blob. If you may need to read or modify the blob's data before the early deletion interval has elapsed, then consider using a **Copy Blob** operation to create a copy of the blob in the Archive tier.
+ Keep in mind that when you move a blob to the archive tier with **Set Blob Tier**, you can't read or modify the blob's data until you rehydrate the blob. If you might need to read or modify the blob's data before the early deletion interval has elapsed, then consider using a **Copy Blob** operation to create a copy of the blob in the archive tier.
-- You can copy a blob in an online tier to the Archive tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. You can call the **Copy Blob** operation to copy a blob from an online tier (Hot or Cool) to the Archive tier. The source blob remains in the online tier, and you can continue to read or modify its data in the online tier.
+- You can copy a blob in an online tier to the archive tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. You can call the **Copy Blob** operation to copy a blob from an online tier (hot or cool) to the archive tier. The source blob remains in the online tier, and you can continue to read or modify its data in the online tier.
### Archive an existing blob by changing its tier
-Use the **Set Blob Tier** operation to move a blob from the Hot or Cool tier to the Archive tier. The **Set Blob Tier** operation is best for scenarios where you won't need to access the archived data before the early deletion interval has elapsed.
+Use the **Set Blob Tier** operation to move a blob from the hot or cool tier to the archive tier. The **Set Blob Tier** operation is best for scenarios where you won't need to access the archived data before the early deletion interval has elapsed.
-The **Set Blob Tier** operation changes the tier of a single blob. To move a set of blobs to the Archive tier with optimum performance, Microsoft recommends performing a bulk archive operation. The bulk archive operation sends a batch of **Set Blob Tier** calls to the service in a single transaction. For more information, see [Bulk archive](#bulk-archive).
+The **Set Blob Tier** operation changes the tier of a single blob. To move a set of blobs to the archive tier with optimum performance, Microsoft recommends performing a bulk archive operation. The bulk archive operation sends a batch of **Set Blob Tier** calls to the service in a single transaction. For more information, see [Bulk archive](#bulk-archive).
#### [Portal](#tab/azure-portal)
-To move an existing blob to the Archive tier in the Azure portal, follow these steps:
+To move an existing blob to the archive tier in the Azure portal, follow these steps:
1. Navigate to the blob's container. 1. Select the blob to archive.
To move an existing blob to the Archive tier in the Azure portal, follow these s
#### [PowerShell](#tab/azure-powershell)
-To change a blob's tier from Hot or Cool to Archive with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier from hot or cool to archive with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell # Initialize these variables with your values.
$blob.BlobClient.SetAccessTier("Archive", $null)
#### [Azure CLI](#tab/azure-cli)
-To change a blob's tier from Hot or Cool to Archive with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier from hot or cool to archive with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
```azurecli az storage blob set-tier \
az storage blob set-tier \
### [AzCopy](#tab/azcopy)
-To change a blob's tier from Hot or Cool to Archive, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `-block-blob-tier` parameter to `archive`.
+To change a blob's tier from hot or cool to archive, use the [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md) command and set the `--block-blob-tier` parameter to `archive`.
> [!IMPORTANT] > The ability to change a blob's tier by using AzCopy is currently in PREVIEW.
azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<con
### Archive an existing blob with a copy operation
-Use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from the Hot or Cool tier to the Archive tier. The source blob remains in the Hot or Cool tier, while the destination blob is created in the Archive tier.
+Use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from the hot or cool tier to the archive tier. The source blob remains in the hot or cool tier, while the destination blob is created in the archive tier.
A **Copy Blob** operation is best for scenarios where you may need to read or modify the archived data before the early deletion interval has elapsed. You can access the source blob's data without needing to rehydrate the archived blob.
N/A
#### [PowerShell](#tab/azure-powershell)
-To copy a blob from an online tier to the Archive tier with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob from an online tier to the archive tier with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the archive tier. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell # Initialize these variables with your values.
$ctx = (Get-AzStorageAccount `
-ResourceGroupName $rgName ` -Name $accountName).Context
-# Copy the source blob to a new destination blob in Archive tier.
+# Copy the source blob to a new destination blob in the archive tier.
Start-AzStorageBlobCopy -SrcContainer $srcContainerName ` -SrcBlob $srcBlobName ` -DestContainer $destContainerName `
Start-AzStorageBlobCopy -SrcContainer $srcContainerName `
#### [Azure CLI](#tab/azure-cli)
-To copy a blob from an online tier to the Archive tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob from an online tier to the archive tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the archive tier. Remember to replace placeholders in angle brackets with your own values:
```azurecli az storage blob copy start \
az storage blob copy start \
#### [AzCopy](#tab/azcopy)
-To copy a blob from an online tier to the Archive tier with AzCopy, specify the URI for the source blob and the URI for the destination blob. The destination blob should have a different name from the source blob, and shouldn't already exist.
+To copy a blob from an online tier to the archive tier with AzCopy, specify the URI for the source blob and the URI for the destination blob. The destination blob should have a different name from the source blob, and shouldn't already exist.
> [!NOTE] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
N/A
-When moving a large number of blobs to the Archive tier, use a batch operation for optimal performance. A batch operation sends multiple API calls to the service with a single request. The suboperations supported by the [Blob Batch](/rest/api/storageservices/blob-batch) operation include [Delete Blob](/rest/api/storageservices/delete-blob) and [Set Blob Tier](/rest/api/storageservices/set-blob-tier).
+When moving a large number of blobs to the archive tier, use a batch operation for optimal performance. A batch operation sends multiple API calls to the service with a single request. The suboperations supported by the [Blob Batch](/rest/api/storageservices/blob-batch) operation include [Delete Blob](/rest/api/storageservices/delete-blob) and [Set Blob Tier](/rest/api/storageservices/set-blob-tier).
> [!NOTE] > The [Set Blob Tier](/rest/api/storageservices/set-blob-tier) suboperation of the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
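+
+The **Blob Batch** operation is exposed through the Azure Storage REST API and SDKs. From the command line, a simple (non-batch) fallback is to loop over the blobs and change each tier individually; the following sketch assumes placeholder names and blob names without spaces:
+
+```azurecli
+# Archive every blob in a container, one Set Blob Tier call at a time.
+for blob in $(az storage blob list \
+        --account-name <storage-account> \
+        --container-name <container> \
+        --query "[].name" --output tsv \
+        --auth-mode login); do
+    az storage blob set-tier \
+        --account-name <storage-account> \
+        --container-name <container> \
+        --name "$blob" \
+        --tier Archive \
+        --auth-mode login
+done
+```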
For an in-depth sample application that shows how to change tiers with a batch o
## Use lifecycle management policies to archive blobs
-You can optimize costs for blob data that is rarely accessed by creating lifecycle management policies that automatically move blobs to the Archive tier when they haven't been accessed or modified for a specified period of time. After you configure a lifecycle management policy, Azure Storage runs it once per day. For more information about lifecycle management policies, see [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md).
+You can optimize costs for blob data that is rarely accessed by creating lifecycle management policies that automatically move blobs to the archive tier when they haven't been accessed or modified for a specified period of time. After you configure a lifecycle management policy, Azure Storage runs it once per day. For more information about lifecycle management policies, see [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md).
You can use the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manager template to create a lifecycle management policy. For simplicity, this section shows how to create a lifecycle management policy in the Azure portal only. For more examples showing how to create lifecycle management policies, see [Configure a lifecycle management policy](lifecycle-management-policy-configure.md). > [!CAUTION]
-> Before you use a lifecycle management policy to move data to the Archive tier, verify that that data does not need to be deleted or moved to another tier for at least 180 days. Data that is deleted or moved to a different tier before the 180 day period has elapsed is subject to an early deletion fee.
+> Before you use a lifecycle management policy to move data to the archive tier, verify that the data doesn't need to be deleted or moved to another tier for at least 180 days. Data that is deleted or moved to a different tier before the 180-day period has elapsed is subject to an early deletion fee.
>
-> Also keep in mind that data in the Archive tier must be rehydrated before it can be read or modified. Rehydrating a blob from the Archive tier can take several hours and has associated costs.
+> Also keep in mind that data in the archive tier must be rehydrated before it can be read or modified. Rehydrating a blob from the archive tier can take several hours and has associated costs.
To create a lifecycle management policy to archive blobs in the Azure portal, follow these steps:
To create a lifecycle management policy to archive blobs in the Azure portal, fo
- Objects were created some number of days ago. - Objects were last accessed some number of days ago.
- Only one of these conditions can be applied to move a particular type of object to the Archive tier per rule. For example, if you define an action that archives base blobs if they haven't been modified for 90 days, then you can't also define an action that archives base blobs if they haven't been accessed for 90 days. Similarly, you can define one action per rule with either of these conditions to archive previous versions, and one to archive snapshots.
+ Only one of these conditions can be applied to move a particular type of object to the archive tier per rule. For example, if you define an action that archives base blobs if they haven't been modified for 90 days, then you can't also define an action that archives base blobs if they haven't been accessed for 90 days. Similarly, you can define one action per rule with either of these conditions to archive previous versions, and one to archive snapshots.
8. Next, specify the number of days to elapse after the object is modified or accessed.
-9. Specify that the object is to be moved to the Archive tier after the interval has elapsed.
+9. Specify that the object is to be moved to the archive tier after the interval has elapsed.
> [!div class="mx-imgBorder"] > ![Screenshot showing how to configure a lifecycle management policy - Base blob tab.](./media/archive-blob/lifecycle-policy-base-blobs-tab-portal.png)
Here's the JSON for the simple lifecycle management policy created in the images
## See also -- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
- [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
In this example, the total cost to rehydrate (retrieving + reading) would be $0.
> [!NOTE] > If you set the rehydration priority to high, then the data retrieval and read rates increase.
-If you plan to rehydrate data, you should try to avoid an early deletion fee. To review your options, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+If you plan to rehydrate data, you should try to avoid an early deletion fee. To review your options, see [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
## Scenario: One-time data backup
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
## Archive versus cold and cool
-Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers a near real-time read latency with a lower price than that the hot tier. Understanding your access requirements will help you to choose between the cool, cold, and archive tiers.
+Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers near real-time read latency at a lower price than the hot tier. Understanding your access requirements will help you choose between the cool, cold, and archive tiers.
The following table compares the cost of archive storage with the cost of cool and cold storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size to archive. It also assumes 1 read each month about 10% of stored capacity (1024 GB), and 10% of total transactions (20,000). <br><br>
storage Archive Rehydrate Handle Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-handle-event.md
# Run an Azure Function in response to a blob rehydration event
-To read a blob that is in the Archive tier, you must first rehydrate the blob to the Hot or Cool tier. The rehydration process can take several hours to complete. Instead of repeatedly polling the status of the rehydration operation, you can configure [Azure Event Grid](../../event-grid/overview.md) to fire an event when the blob rehydration operation is complete and handle this event in your application.
+To read a blob that is in the archive tier, you must first rehydrate the blob to the hot or cool tier. The rehydration process can take several hours to complete. Instead of repeatedly polling the status of the rehydration operation, you can configure [Azure Event Grid](../../event-grid/overview.md) to fire an event when the blob rehydration operation is complete and handle this event in your application.
When an event occurs, Event Grid sends the event to an event handler via an endpoint. A number of Azure services can serve as event handlers, including [Azure Functions](../../azure-functions/functions-overview.md). An Azure Function is a block of code that can execute in response to an event. This how-to walks you through the process of developing an Azure Function and then configuring Event Grid to run the function in response to an event that occurs when a blob is rehydrated. This article shows you how to create and test an Azure Function with .NET from Visual Studio. You can build Azure Functions from a variety of local development environments and using a variety of different programming languages. For more information about supported languages for Azure Functions, see [Supported languages in Azure Functions](../../azure-functions/supported-languages.md). For more information about development options for Azure Functions, see [Code and test Azure Functions locally](../../azure-functions/functions-develop-local.md).
-For more information about rehydrating blobs from the Archive tier, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+For more information about rehydrating blobs from the archive tier, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
## Prerequisites
Whenever you make changes to the code in your Azure Function, you must publish t
You now have a function app that contains an Azure Function that can run in response to an event. The next step is to create an event subscription from your storage account. The event subscription configures the storage account to publish an event through Event Grid in response to an operation on a blob in your storage account. Event Grid then sends the event to the event handler endpoint that you've specified. In this case, the event handler is the Azure Function that you created in the previous section.
-When you create the event subscription, you can filter which events are sent to the event handler. The events to capture when rehydrating a blob from the Archive tier are **Microsoft.Storage.BlobTierChanged**, corresponding to a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, and **Microsoft.Storage.BlobCreated** events, corresponding to a [Copy Blob](/rest/api/storageservices/copy-blob) operation. Depending on your scenario, you may want to handle only one of these events.
+When you create the event subscription, you can filter which events are sent to the event handler. The events to capture when rehydrating a blob from the archive tier are **Microsoft.Storage.BlobTierChanged**, corresponding to a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, and **Microsoft.Storage.BlobCreated** events, corresponding to a [Copy Blob](/rest/api/storageservices/copy-blob) operation. Depending on your scenario, you may want to handle only one of these events.
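+
+If you prefer the Azure CLI to the portal steps that follow, an equivalent subscription with the same event filters can be created with a command like the following sketch (placeholder values):
+
+```azurecli
+az eventgrid event-subscription create \
+    --name <subscription-name> \
+    --source-resource-id <storage-account-resource-id> \
+    --endpoint-type azurefunction \
+    --endpoint <function-resource-id> \
+    --included-event-types Microsoft.Storage.BlobCreated Microsoft.Storage.BlobTierChanged
+```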
To create the event subscription, follow these steps:
-1. In the Azure portal, navigate to the storage account that contains blobs to rehydrate from the Archive tier.
+1. In the Azure portal, navigate to the storage account that contains blobs to rehydrate from the archive tier.
1. Select the **Events** setting in the left navigation pane. 1. On the **Events** page, select **More options**. 1. Select **Create Event Subscription**. 1. On the **Create Event Subscription** page, in the **Event subscription details** section, provide a name for the event subscription. 1. In the **Topic details** section, provide a name for the system topic. The system topic represents one or more events that are published by Azure Storage. For more information about system topics, see [System topics in Azure Event Grid](../../event-grid/system-topics.md).
-1. In the **Event Types** section, select the **Blob Created** and **Blob Tier Changed** events. Depending on how you choose to rehydrate a blob from the Archive tier, one of these two events will fire.
+1. In the **Event Types** section, select the **Blob Created** and **Blob Tier Changed** events. Depending on how you choose to rehydrate a blob from the archive tier, one of these two events will fire.
:::image type="content" source="media/archive-rehydrate-handle-event/select-event-types-portal.png" alt-text="Screenshot showing how to select event types for blob rehydration events in the Azure portal":::
To learn how to test the function by rehydrating a blob, see one of these two pr
- [Rehydrate a blob with a copy operation](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-with-a-copy-operation) - [Rehydrate a blob by changing its tier](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-by-changing-its-tier)
-After the rehydration is complete, the log blob is written to the same container as the blob that you rehydrated. For example, after you rehydrate a blob with a copy operation, you can see in the Azure portal that the original source blob remains in the Archive tier, the fully rehydrated destination blob appears in the targeted online tier, and the log blob that was created by the Azure Function also appears in the list.
+After the rehydration is complete, the log blob is written to the same container as the blob that you rehydrated. For example, after you rehydrate a blob with a copy operation, you can see in the Azure portal that the original source blob remains in the archive tier, the fully rehydrated destination blob appears in the targeted online tier, and the log blob that was created by the Azure Function also appears in the list.
-Keep in mind that rehydrating a blob can take up to 15 hours, depending on the rehydration priority setting. If you set the rehydration priority to **High**, rehydration may complete in under one hour for blobs that are less than 10 GB in size. However, a high-priority rehydration incurs a greater cost. For more information, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+Keep in mind that rehydrating a blob can take up to 15 hours, depending on the rehydration priority setting. If you set the rehydration priority to **High**, rehydration may complete in under one hour for blobs that are less than 10 GB in size. However, a high-priority rehydration incurs a greater cost. For more information, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
> [!TIP]
-> Although the goal of this how-to is to handle these events in the context of blob rehydration, for testing purposes it may also be helpful to observe these events in response to uploading a blob or changing an online blob's tier (*i.e.*, from Hot to Cool), because the event fires immediately.
+> Although the goal of this how-to is to handle these events in the context of blob rehydration, for testing purposes it may also be helpful to observe these events in response to uploading a blob or changing an online blob's tier (*i.e.*, from hot to cool), because the event fires immediately.
For more information on how to filter events in Event Grid, see [How to filter events for Azure Event Grid](../../event-grid/how-to-filter-events.md). ## See also -- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)-- [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md)
+- [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md)
- [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md) - [Reacting to Blob storage events](storage-blob-event-overview.md)
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Title: Blob rehydration from the Archive tier
-description: While a blob is in the Archive access tier, it's considered to be offline, and can't be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the Hot or Cool tier.
+ Title: Blob rehydration from the archive tier
+description: While a blob is in the archive access tier, it's considered to be offline, and can't be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the hot or cool tier.
-# Blob rehydration from the Archive tier
+# Blob rehydration from the archive tier
-While a blob is in the Archive access tier, it's considered to be offline, and can't be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the Hot or Cool tier. There are two options for rehydrating a blob that is stored in the Archive tier:
+While a blob is in the archive access tier, it's considered to be offline, and can't be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the hot or cool tier. There are two options for rehydrating a blob that is stored in the archive tier:
-- [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier): You can rehydrate an archived blob by copying it to a new blob in the Hot or Cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. Microsoft recommends this option for most scenarios.
+- [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier): You can rehydrate an archived blob by copying it to a new blob in the hot or cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. Microsoft recommends this option for most scenarios.
-- [Change an archived blob's access tier to an online tier](#change-a-blobs-access-tier-to-an-online-tier): You can rehydrate an archived blob to the Hot or Cool tier by changing its tier using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
+- [Change an archived blob's access tier to an online tier](#change-a-blobs-access-tier-to-an-online-tier): You can rehydrate an archived blob to the hot or cool tier by changing its tier using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
-Rehydrating a blob from the Archive tier can take several hours to complete. Microsoft recommends archiving larger blobs for optimal performance when rehydrating. Rehydrating a large number of small blobs may require extra time due to the processing overhead on each blob. A maximum of 10 GiB per storage account may be rehydrated per hour with priority retrieval.
+Rehydrating a blob from the archive tier can take several hours to complete. Microsoft recommends archiving larger blobs for optimal performance when rehydrating. Rehydrating a large number of small blobs may require extra time due to the processing overhead on each blob. A maximum of 10 GiB per storage account may be rehydrated per hour with priority retrieval.
To learn how to rehydrate an archived blob to an online tier, see [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md).
For more information on pricing differences between standard-priority and high-p
## Copy an archived blob to an online tier
-The first option for moving a blob from the Archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the Hot or Cool tier. You can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the Archive tier.
+The first option for moving a blob from the archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the hot or cool tier. You can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the archive tier.
You must copy the archived blob to a new blob with a different name or to a different container. You can't overwrite the source blob by copying to the same blob.
-Microsoft recommends performing a copy operation in most scenarios where you need to move a blob from the Archive tier to an online tier, for the following reasons:
+Microsoft recommends performing a copy operation in most scenarios where you need to move a blob from the archive tier to an online tier, for the following reasons:
-- A copy operation avoids the early deletion fee that is assessed if you change the tier of a blob from the Archive tier before the required 180-day period elapses. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
+- A copy operation avoids the early deletion fee that is assessed if you change the tier of a blob from the archive tier before the required 180-day period elapses. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
-- If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob with [Set Blob Tier](/rest/api/storageservices/set-blob-tier) can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy. A copy operation leaves the source blob in the Archive tier and creates a new blob with a different name and a new last modified time, so there's no risk that the rehydrated blob will be moved back to the Archive tier by the lifecycle policy.
+- If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob with [Set Blob Tier](/rest/api/storageservices/set-blob-tier) can result in a scenario where the lifecycle policy moves the blob back to the archive tier after rehydration because the last modified time is beyond the threshold set for the policy. A copy operation leaves the source blob in the archive tier and creates a new blob with a different name and a new last modified time, so there's no risk that the rehydrated blob will be moved back to the archive tier by the lifecycle policy.
-Copying a blob from the Archive tier can take hours to complete depending on the rehydration priority selected. Behind the scenes, a blob copy operation reads your archived source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list the blobs in the parent container before the rehydration operation is complete, but its tier will be set to Archive. The data isn't available until the read operation from the source blob in the Archive tier is complete and the blob's contents have been written to the new destination blob in an online tier. The new blob is an independent copy, so modifying or deleting it doesn't affect the source blob in the Archive tier.
+Copying a blob from the archive tier can take hours to complete depending on the rehydration priority selected. Behind the scenes, a blob copy operation reads your archived source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list the blobs in the parent container before the rehydration operation is complete, but its tier will be set to archive. The data isn't available until the read operation from the source blob in the archive tier is complete and the blob's contents have been written to the new destination blob in an online tier. The new blob is an independent copy, so modifying or deleting it doesn't affect the source blob in the archive tier.
To learn how to rehydrate a blob by copying it to an online tier, see [Rehydrate a blob with a copy operation](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-with-a-copy-operation).
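+
+For illustration, a copy-based rehydration can be started from the Azure CLI; the following sketch uses placeholder values:
+
+```azurecli
+az storage blob copy start \
+    --account-name <storage-account> \
+    --destination-container <container> \
+    --destination-blob <rehydrated-blob> \
+    --source-uri "<archived-blob-url>" \
+    --tier Hot \
+    --rehydrate-priority Standard \
+    --auth-mode login
+```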
To learn how to rehydrate a blob by copying it to an online tier, see [Rehydrate
Rehydrating an archived blob by copying it to an online destination tier is supported within the same storage account only for service versions prior to 2021-02-12. Beginning with service version 2021-02-12, you can rehydrate an archived blob by copying it to a different storage account, as long as the destination account is in the same region as the source account. Rehydration across storage accounts enables you to segregate your production data from your backup data, by maintaining them in separate accounts. Isolating archived data in a separate account can also help to mitigate costs from unintentional rehydration.
-The target blob for the copy operation must be in an online tier (Hot or Cool). You can't copy an archived blob to a destination blob that is also in the Archive tier.
+The target blob for the copy operation must be in an online tier (hot or cool). You can't copy an archived blob to a destination blob that is also in the archive tier.
The following table shows the behavior of a blob copy operation, depending on the tiers of the source and destination blob.
To learn more about obtaining read access to secondary regions, see [Read access
## Change a blob's access tier to an online tier
-The second option for rehydrating a blob from the Archive tier to an online tier is to change the blob's tier by calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier). With this operation, you can change the tier of the archived blob to either Hot or Cool.
+The second option for rehydrating a blob from the archive tier to an online tier is to change the blob's tier by calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier). With this operation, you can change the tier of the archived blob to either hot or cool.
Once a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) request is initiated, it can't be canceled. During the rehydration operation, the blob's access tier setting continues to show as archived until the rehydration process is complete. When the rehydration operation is complete, the blob's access tier property updates to reflect the new tier. To learn how to rehydrate a blob by changing its tier to an online tier, see [Rehydrate a blob by changing its tier](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-by-changing-its-tier). > [!CAUTION]
-> Changing a blob's tier doesn't affect its last modified time. If there is a [lifecycle management](./lifecycle-management-overview.md) policy in effect for the storage account, then rehydrating a blob with **Set Blob Tier** can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy.
+> Changing a blob's tier doesn't affect its last modified time. If there is a [lifecycle management](./lifecycle-management-overview.md) policy in effect for the storage account, then rehydrating a blob with **Set Blob Tier** can result in a scenario where the lifecycle policy moves the blob back to the archive tier after rehydration because the last modified time is beyond the threshold set for the policy.
> > To avoid this scenario, add the `daysAfterLastTierChangeGreaterThan` condition to the `tierToArchive` action of the policy. Alternatively, you can rehydrate the archived blob by copying it instead, as described in the [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier) section. Performing a copy operation creates a new instance of the blob with an updated last modified time, so it won't trigger the lifecycle management policy.
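+
+For illustration, a tier-change rehydration can be started from the Azure CLI; the following sketch uses placeholder values:
+
+```azurecli
+az storage blob set-tier \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --name <archived-blob> \
+    --tier Hot \
+    --rehydrate-priority Standard \
+    --auth-mode login
+```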
Rehydration of an archived blob may take up to 15 hours, and it is inefficient t
Azure Event Grid raises one of the following two events on blob rehydration, depending on which operation was used to rehydrate the blob: -- The **Microsoft.Storage.BlobCreated** event fires when a blob is created. In the context of blob rehydration, this event fires when a [Copy Blob](/rest/api/storageservices/copy-blob) operation creates a new destination blob in either the Hot or Cool tier and the blob's data is fully rehydrated from the Archive tier. If the account has the **hierarchical namespace** feature enabled on it, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed.
+- The **Microsoft.Storage.BlobCreated** event fires when a blob is created. In the context of blob rehydration, this event fires when a [Copy Blob](/rest/api/storageservices/copy-blob) operation creates a new destination blob in either the hot or cool tier and the blob's data is fully rehydrated from the archive tier. If the account has the **hierarchical namespace** feature enabled on it, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed.
-- The **Microsoft.Storage.BlobTierChanged** event fires when a blob's tier is changed. In the context of blob rehydration, this event fires when a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation successfully changes an archived blob's tier to the Hot or Cool tier.
+- The **Microsoft.Storage.BlobTierChanged** event fires when a blob's tier is changed. In the context of blob rehydration, this event fires when a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation successfully changes an archived blob's tier to the hot or cool tier.
To learn how to capture an event on rehydration and send it to an Azure Function event handler, see [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md).
For more information on handling events in Blob Storage, see [Reacting to Azure
A rehydration operation with [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is billed for data read transactions and data retrieval size. A high-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill. If a high-priority request to return an archived blob that is less than 10 GB in size takes more than five hours, you won't be charged the high-priority retrieval rate. However, standard retrieval rates still apply.
-Copying an archived blob to an online tier with [Copy Blob](/rest/api/storageservices/copy-blob) is billed for data read transactions and data retrieval size. Creating the destination blob in an online tier is billed for data write transactions. Early deletion fees don't apply when you copy to an online blob because the source blob remains unmodified in the Archive tier. High-priority retrieval charges do apply if selected.
+Copying an archived blob to an online tier with [Copy Blob](/rest/api/storageservices/copy-blob) is billed for data read transactions and data retrieval size. Creating the destination blob in an online tier is billed for data write transactions. Early deletion fees don't apply when you copy to an online blob because the source blob remains unmodified in the archive tier. High-priority retrieval charges do apply if selected.
-Blobs in the Archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
+Blobs in the archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee. For example, if a blob is moved to the archive tier and then deleted or moved to the hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the archive tier. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
For more information about pricing for block blobs and data rehydration, see [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). For more information on outbound data transfer charges, see [Data Transfers Pricing Details](https://azure.microsoft.com/pricing/details/data-transfers/). ## See also -- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md)
- [Archive a blob](archive-blob.md) - [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md) - [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md)
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
az storage blob snapshot \
## Set blob tier
-When you change a blob's tier, you move the blob and all of its data to the target tier. You can change the tier between **Hot**, **Cool**, and **Archive** with the `az storage blob set-tier` command.
+When you change a blob's tier, you move the blob and all of its data to the target tier. You can change the tier between **hot**, **cool**, and **archive** with the `az storage blob set-tier` command.
Depending on your requirements, you can also use the *Copy Blob* operation to copy a blob from one tier to another. The *Copy Blob* operation creates a new blob in the desired tier while the source blob remains in the original tier.
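For example, the following sketch (the account, container, blob names, and source URL are placeholders) copies an archived blob into a new hot-tier blob:

```azurecli-interactive
# Sketch: rehydrate by copying; the archived source blob is left unmodified.
# Account, container, blob names, and the source URL are placeholders.
az storage blob copy start \
    --account-name <storage-account> \
    --destination-container rehydrated \
    --destination-blob demo-file.txt \
    --source-uri "https://<storage-account>.blob.core.windows.net/archive/demo-file.txt" \
    --tier Hot \
    --rehydrate-priority Standard \
    --auth-mode login
```

The source blob stays in its original tier; only the new destination blob lands in the hot tier.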
-Changing tiers from **Cool** or **Hot** to **Archive** takes place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+Changing tiers from **cool** or **hot** to **archive** takes place almost immediately. After a blob is moved to the **archive** tier, it's considered to be offline and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
For additional information, see the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) reference.
-The following sample code sets the tier to **Hot** for a single, named blob within the `archive` container.
+The following sample code sets the tier to **hot** for a single, named blob within the `archive` container.
```azurecli-interactive
#!/bin/bash

# Minimal sketch: the storage account and blob names are placeholders.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name archive \
    --name <blob-name> \
    --tier Hot \
    --auth-mode login
```
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
$blob.BlobClient.CreateSnapshot()
## Set blob tier
-When you change a blob's tier, you move the blob and all of its data to the target tier. To make the change, retrieve a blob with the `Get-AzStorageBlob` cmdlet, and call the `BlobClient.SetAccessTier` method. This approach can be used to change the tier between **Hot**, **Cool**, and **Archive**.
+When you change a blob's tier, you move the blob and all of its data to the target tier. To make the change, retrieve a blob with the `Get-AzStorageBlob` cmdlet, and call the `BlobClient.SetAccessTier` method. This approach can be used to change the tier between **hot**, **cool**, and **archive**.
-Changing tiers from **Cool** or **Hot** to **Archive** take place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline, and can't be read or modified. Before you can read or modify an archived blob's data, you need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+Changing tiers from **cool** or **hot** to **archive** takes place almost immediately. After a blob is moved to the **archive** tier, it's considered to be offline, and can't be read or modified. Before you can read or modify an archived blob's data, you need to rehydrate it to an online tier. Read more about [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
-The following sample code sets the tier to **Hot** for all blobs within the `archive` container.
+The following sample code sets the tier to **hot** for all blobs within the `archive` container.
```azurepowershell
$blobs = Get-AzStorageBlob -Container archive -Context $ctx

# Minimal sketch: set each blob's tier to hot.
$blobs | ForEach-Object { $_.BlobClient.SetAccessTier("Hot") }
```
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
If you fail to pay your bill and your account has an active time-based retention
## Feature support
+This feature is incompatible with point-in-time restore and last access time tracking.
+ [!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)] ## Next steps
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication does not copy the source blob's index tags to the destination
### Blob tiering
-Object replication is supported when the source and destination accounts are in the hot or cool tier. The source and destination accounts may be in different tiers. However, object replication will fail if a blob in either the source or destination account has been moved to the archive tier. For more information on blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+Object replication is supported when the source and destination accounts are in the hot or cool tier. The source and destination accounts may be in different tiers. However, object replication will fail if a blob in either the source or destination account has been moved to the archive tier. For more information on blob tiers, see [Access tiers for blob data](access-tiers-overview.md).
### Immutable blobs
You can also specify one or more filters as part of a replication rule to filter
The source and destination containers must both exist before you can specify them in a rule. After you create the replication policy, write operations to the destination container aren't permitted. Any attempts to write to the destination container fail with error code 409 (Conflict). To write to a destination container for which a replication rule is configured, you must either delete the rule that is configured for that container, or remove the replication policy. Read and delete operations to the destination container are permitted when the replication policy is active.
-You can call the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation on a blob in the destination container to move it to the archive tier. For more information about the archive tier, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md#archive-access-tier).
+You can call the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation on a blob in the destination container to move it to the archive tier. For more information about the archive tier, see [Access tiers for blob data](access-tiers-overview.md#archive-access-tier).
> [!NOTE] > Changing the access tier of a blob in the source account won't change the access tier of that blob in the destination account.
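As a sketch with placeholder names, archiving a blob in the destination container might look like this:

```azurecli
# Sketch: move a replicated blob in the destination container to the archive
# tier. Account, container, and blob names are placeholders.
az storage blob set-tier \
    --account-name <destination-account> \
    --container-name <destination-container> \
    --name <blob-name> \
    --tier Archive \
    --auth-mode login
```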
If the replication status for a blob in the source account indicates failure, th
- Verify that the destination container is not in the process of being deleted, or has not just been deleted. Deleting a container may take up to 30 seconds. - Verify that the destination container is still participating in the object replication policy. - If the source blob has been encrypted with a customer-provided key as part of a write operation, then object replication will fail. For more information about customer-provided keys, see [Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md).-- Check whether the source or destination blob has been moved to the Archive tier. Archived blobs cannot be replicated via object replication. For more information about the Archive tier, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+- Check whether the source or destination blob has been moved to the archive tier. Archived blobs cannot be replicated via object replication. For more information about the archive tier, see [Access tiers for blob data](access-tiers-overview.md).
- Verify that destination container or blob is not protected by an immutability policy. Keep in mind that a container or blob can inherit an immutability policy from its parent. For more information about immutability policies, see [Overview of immutable storage for blob data](immutable-storage-overview.md). ## Feature support
storage Snapshots Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-overview.md
Title: Blob snapshots
-description: Understand how blob snapshots work and how they are billed.
+description: Understand how blob snapshots work and how they're billed.
A snapshot of a blob is identical to its base blob, except that the blob URI has
> [!NOTE] > All snapshots share the base blob's URI. The only distinction between the base blob and the snapshot is the appended **DateTime** value.
-A blob can have any number of snapshots. Snapshots persist until they are explicitly deleted, either independently or as part of a [Delete Blob](/rest/api/storageservices/delete-blob) operation for the base blob. You can enumerate the snapshots associated with the base blob to track your current snapshots.
+A blob can have any number of snapshots. Snapshots persist until they're explicitly deleted, either independently or as part of a [Delete Blob](/rest/api/storageservices/delete-blob) operation for the base blob. You can enumerate the snapshots associated with the base blob to track your current snapshots.
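For example, a minimal Azure CLI sketch (account and container names are placeholders) lists a container's blobs together with their snapshots:

```azurecli
# Sketch: include snapshots in the listing with --include s; each snapshot
# entry carries the DateTime value that identifies it. Names are placeholders.
az storage blob list \
    --account-name <storage-account> \
    --container-name <container> \
    --include s \
    --auth-mode login \
    --output table
```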
-When you create a snapshot of a blob, the blob's system properties are copied to the snapshot with the same values. The base blob's metadata is also copied to the snapshot, unless you specify separate metadata for the snapshot when you create it. After you create a snapshot, you can read, copy, or delete it, but you cannot modify it.
+When you create a snapshot of a blob, the blob's system properties are copied to the snapshot with the same values. The base blob's metadata is also copied to the snapshot, unless you specify separate metadata for the snapshot when you create it. After you create a snapshot, you can read, copy, or delete it, but you can't modify it.
-Any leases associated with the base blob do not affect the snapshot. You cannot acquire a lease on a snapshot.
+Any leases associated with the base blob don't affect the snapshot. You can't acquire a lease on a snapshot.
-You can create a snapshot of a blob in the Hot or Cool tier. Snapshots on blobs in the Archive tier are not supported.
+You can create a snapshot of a blob in the hot or cool tier. Snapshots on blobs in the archive tier aren't supported.
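As a brief sketch (placeholder names), creating a snapshot of an online-tier blob with the Azure CLI looks like this:

```azurecli
# Sketch: create a read-only snapshot of a blob in the hot or cool tier.
# Account, container, and blob names are placeholders.
az storage blob snapshot \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob-name> \
    --auth-mode login
```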
A VHD file is used to store the current information and status for a VM disk. You can detach a disk from within the VM or shut down the VM, and then take a snapshot of its VHD file. You can use that snapshot file later to retrieve the VHD file at that point in time and recreate the VM. ## Pricing and billing
-Creating a snapshot, which is a read-only copy of a blob, can result in additional data storage charges to your account. When designing your application, it is important to be aware of how these charges might accrue so that you can minimize costs.
+Creating a snapshot, which is a read-only copy of a blob, can result in extra data storage charges to your account. When designing your application, it's important to be aware of how these charges might accrue so that you can minimize costs.
-Blob snapshots, like blob versions, are billed at the same rate as active data. How snapshots are billed depends on whether you have explicitly set the tier for the base blob or for any of its snapshots (or versions). For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+Blob snapshots, like blob versions, are billed at the same rate as active data. How snapshots are billed depends on whether you have explicitly set the tier for the base blob or for any of its snapshots (or versions). For more information about blob tiers, see [Access tiers for blob data](access-tiers-overview.md).
-If you have not changed a blob or snapshot's tier, then you are billed for unique blocks of data across that blob, its snapshots, and any versions it may have. For more information, see [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+If you haven't changed a blob or snapshot's tier, then you're billed for unique blocks of data across that blob, its snapshots, and any versions it may have. For more information, see [Billing when the blob tier hasn't been explicitly set](#billing-when-the-blob-tier-hasnt-been-explicitly-set).
-If you have changed a blob or snapshot's tier, then you are billed for the entire object, regardless of whether the blob and snapshot are eventually in the same tier again. For more information, see [Billing when the blob tier has been explicitly set](#billing-when-the-blob-tier-has-been-explicitly-set).
+If you have changed a blob or snapshot's tier, then you're billed for the entire object, regardless of whether the blob and snapshot are eventually in the same tier again. For more information, see [Billing when the blob tier has been explicitly set](#billing-when-the-blob-tier-has-been-explicitly-set).
For more information about billing details for blob versions, see [Blob versioning](versioning-overview.md).
For more information about billing details for blob versions, see [Blob versioni
Microsoft recommends managing your snapshots carefully to avoid extra charges. You can follow these best practices to help minimize the costs incurred by the storage of your snapshots: -- Delete and re-create snapshots associated with a blob whenever you update the blob, even if you are updating with identical data, unless your application design requires that you maintain snapshots. By deleting and re-creating the blob's snapshots, you can ensure that the blob and snapshots do not diverge.-- If you are maintaining snapshots for a blob, avoid calling methods that overwrite the entire blob when you update the blob. Instead, update the fewest possible number of blocks in order to keep costs low.
+- Delete and re-create snapshots associated with a blob whenever you update the blob, even if you're updating with identical data, unless your application design requires that you maintain snapshots. By deleting and re-creating the blob's snapshots, you can ensure that the blob and snapshots don't diverge.
+- If you're maintaining snapshots for a blob, avoid calling methods that overwrite the entire blob when you update the blob. Instead, update the fewest possible number of blocks in order to keep costs low.
-### Billing when the blob tier has not been explicitly set
+### Billing when the blob tier hasn't been explicitly set
-If you have not explicitly set the blob tier for a base blob or any of its snapshots, then you are charged for unique blocks or pages across the blob, its snapshots, and any versions it may have. Data that is shared across a blob and its snapshots is charged only once. When a blob is updated, then data in a base blob diverges from the data stored in its snapshots, and the unique data is charged per block or page.
+If you have not explicitly set the blob tier for a base blob or any of its snapshots, then you're charged for unique blocks or pages across the blob, its snapshots, and any versions it may have. Data that is shared across a blob and its snapshots is charged only once. When a blob is updated, then data in a base blob diverges from the data stored in its snapshots, and the unique data is charged per block or page.
-When you replace a block within a block blob, that block is subsequently charged as a unique block. This is true even if the block has the same block ID and the same data as it has in the snapshot. After the block is committed again, it diverges from its counterpart in the snapshot, and you will be charged for its data. The same holds true for a page in a page blob that's updated with identical data.
+When you replace a block within a block blob, that block is later charged as a unique block. This is true even if the block has the same block ID and the same data as it has in the snapshot. After the block is committed again, it diverges from its counterpart in the snapshot, and you'll be charged for its data. The same holds true for a page in a page blob that's updated with identical data.
-Blob storage does not have a means to determine whether two blocks contain identical data. Each block that is uploaded and committed is treated as unique, even if it has the same data and the same block ID. Because charges accrue for unique blocks, it's important to keep in mind that updating a blob when that blob has snapshots or versions will result in additional unique blocks and additional charges.
+Blob storage doesn't have a means to determine whether two blocks contain identical data. Each block that is uploaded and committed is treated as unique, even if it has the same data and the same block ID. Because charges accrue for unique blocks, it's important to keep in mind that updating a blob when that blob has snapshots or versions results in extra unique blocks and extra charges.
-When a blob has snapshots, call update operations on block blobs so that they update the least possible number of blocks. The write operations that permit fine-grained control over blocks are [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list). The [Put Blob](/rest/api/storageservices/put-blob) operation, on the other hand, replaces the entire contents of a blob and so may lead to additional charges.
+When a blob has snapshots, call update operations on block blobs so that they update the least possible number of blocks. The write operations that permit fine-grained control over blocks are [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list). The [Put Blob](/rest/api/storageservices/put-blob) operation, on the other hand, replaces the entire contents of a blob and so may lead to extra charges.
The following scenarios demonstrate how charges accrue for a block blob and its snapshots when the blob tier has not been explicitly set. #### Scenario 1
-In scenario 1, the base blob has not been updated after the snapshot was taken, so charges are incurred only for unique blocks 1, 2, and 3.
+In scenario 1, the base blob hasn't been updated after the snapshot was taken, so charges are incurred only for unique blocks 1, 2, and 3.
![Diagram 1 showing billing for unique blocks in base blob and snapshot.](./media/snapshots-overview/storage-blob-snapshots-billing-scenario-1.png) #### Scenario 2
-In scenario 2, the base blob has been updated, but the snapshot has not. Block 3 was updated, and even though it contains the same data and the same ID, it is not the same as block 3 in the snapshot. As a result, the account is charged for four blocks.
+In scenario 2, the base blob has been updated, but the snapshot hasn't. Block 3 was updated, and even though it contains the same data and the same ID, it isn't the same as block 3 in the snapshot. As a result, the account is charged for four blocks.
![Diagram 2 showing billing for unique blocks in base blob and snapshot.](./media/snapshots-overview/storage-blob-snapshots-billing-scenario-2.png) #### Scenario 3
-In scenario 3, the base blob has been updated, but the snapshot has not. Block 3 was replaced with block 4 in the base blob, but the snapshot still reflects block 3. As a result, the account is charged for four blocks.
+In scenario 3, the base blob has been updated, but the snapshot hasn't. Block 3 was replaced with block 4 in the base blob, but the snapshot still reflects block 3. As a result, the account is charged for four blocks.
![Diagram 3 showing billing for unique blocks in base blob and snapshot.](./media/snapshots-overview/storage-blob-snapshots-billing-scenario-3.png)
In scenario 4, the base blob has been completely updated and contains none of it
### Billing when the blob tier has been explicitly set
-If you have explicitly set the blob tier for a blob or snapshot (or version), then you are charged for the full content length of the object in the new tier, regardless of whether it shares blocks with an object in the original tier. You are also charged for the full content length of the oldest version in the original tier. Any versions or snapshots that remain in the original tier are charged for unique blocks that they may share, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+If you have explicitly set the blob tier for a blob or snapshot (or version), then you're charged for the full content length of the object in the new tier, regardless of whether it shares blocks with an object in the original tier. You're also charged for the full content length of the oldest version in the original tier. Any versions or snapshots that remain in the original tier are charged for unique blocks that they may share, as described in [Billing when the blob tier hasn't been explicitly set](#billing-when-the-blob-tier-hasnt-been-explicitly-set).
#### Moving a blob to a new tier
-The following table describes the billing behavior for a blob or snapshot when it is moved to a new tier.
+The following table describes the billing behavior for a blob or snapshot when it's moved to a new tier.
-| When blob tier is set explicitly on... | Then you are billed for... |
+| When blob tier is set explicitly on... | Then you're billed for... |
|-|-| | A base blob with a snapshot | The base blob in the new tier and the oldest snapshot in the original tier, plus any unique blocks in other snapshots.<sup>1</sup> | | A base blob with a previous version and a snapshot | The base blob in the new tier, the oldest version in the original tier, and the oldest snapshot in the original tier, plus any unique blocks in other versions or snapshots<sup>1</sup>. | | A snapshot | The snapshot in the new tier and the base blob in the original tier, plus any unique blocks in other snapshots.<sup>1</sup> |
-<sup>1</sup>If there are other previous versions or snapshots that have not been moved from their original tier, those versions or snapshots are charged based on the number of unique blocks they contain, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+<sup>1</sup>If there are other previous versions or snapshots that haven't been moved from their original tier, those versions or snapshots are charged based on the number of unique blocks they contain, as described in [Billing when the blob tier hasn't been explicitly set](#billing-when-the-blob-tier-hasnt-been-explicitly-set).
The following diagram illustrates how objects are billed when a blob with snapshots is moved to a different tier. :::image type="content" source="media/snapshots-overview/snapshot-billing-tiers.png" alt-text="Diagram showing how objects are billed when a blob with snapshots is explicitly tiered.":::
-Explicitly setting the tier for a blob, version, or snapshot cannot be undone. If you move a blob to a new tier and then move it back to its original tier, you are charged for the full content length of the object even if it shares blocks with other objects in the original tier.
+Explicitly setting the tier for a blob, version, or snapshot can't be undone. If you move a blob to a new tier and then move it back to its original tier, you're charged for the full content length of the object even if it shares blocks with other objects in the original tier.
Operations that explicitly set the tier of a blob, version, or snapshot include:
When blob soft delete is enabled, if you delete or overwrite a base blob that ha
The following table describes the billing behavior for a blob that is soft-deleted, depending on whether versioning is enabled or disabled. When versioning is enabled, a new version is created when a blob is soft-deleted. When versioning is disabled, soft-deleting a blob creates a soft-delete snapshot.
-| When you overwrite a base blob with its tier explicitly set... | Then you are billed for... |
+| When you overwrite a base blob with its tier explicitly set... | Then you're billed for... |
|-|-| | If blob soft delete and versioning are both enabled | All existing versions at full content length regardless of tier. | | If blob soft delete is enabled but versioning is disabled | All existing soft-delete snapshots at full content length regardless of tier. |
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
# Delete and restore a blob container with Java + This article shows how to delete containers with the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers. ## Prerequisites
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
# Delete and restore a blob container with JavaScript + This article shows how to delete containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers. ## Prerequisites
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
# Delete and restore a blob container with Python + This article shows how to delete containers with the [Azure Storage client library for Python](/python/api/overview/azure/storage). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers. ## Prerequisites
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
# Delete and restore a blob container with TypeScript + This article shows how to delete containers with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers. ## Prerequisites
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
# Delete and restore a blob container with .NET + This article shows how to delete containers with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers. ## Prerequisites
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
# Create and manage container leases with Java + This article shows how to create and manage container leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases. ## Prerequisites
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
# Create and manage container leases with JavaScript + This article shows how to create and manage container leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases. ## Prerequisites
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
# Create and manage container leases with Python + This article shows how to create and manage container leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can use the client library to acquire, renew, release and break container leases. ## Prerequisites
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
# Create and manage container leases with TypeScript + This article shows how to create and manage container leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases. ## Prerequisites
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
# Create and manage container leases with .NET + This article shows how to create and manage container leases using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break container leases. ## Prerequisites
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
# Manage container properties and metadata with Java + Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). ## Prerequisites
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
# Manage container properties and metadata with JavaScript + Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
# Manage container properties and metadata with Python + Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for Python](/python/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
# Manage container properties and metadata with TypeScript + Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
# Manage container properties and metadata with .NET + Blob containers support system properties and user-defined metadata, in addition to the data they contain. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
# List blob containers with Java + When you list the containers in an Azure Storage account from your code, you can specify several options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). ## Prerequisites
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
# List blob containers with JavaScript + When you list the containers in an Azure Storage account from your code, you can specify a number of options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
# List blob containers with Python + When you list the containers in an Azure Storage account from your code, you can specify several options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for Python](/python/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
# List blob containers with TypeScript + When you list the containers in an Azure Storage account from your code, you can specify a number of options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
# List blob containers with .NET + When you list the containers in an Azure Storage account from your code, you can specify a number of options to manage how results are returned from Azure Storage. This article shows how to list containers using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
# Create and manage blob leases with Java + This article shows how to create and manage blob leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases. ## Prerequisites
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
# Create and manage blob leases with JavaScript + This article shows how to create and manage blob leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases. ## Prerequisites
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
# Create and manage blob leases with Python + This article shows how to create and manage blob leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break blob leases. ## Prerequisites
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
# Create and manage blob leases with TypeScript + This article shows how to create and manage blob leases using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases. ## Prerequisites
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
# Create and manage blob leases with .NET + This article shows how to create and manage blob leases using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break blob leases. ## Prerequisites
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
# Manage blob properties and metadata with Java + In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). ## Prerequisites
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
# Manage blob properties and metadata with JavaScript + In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
# Manage blob properties and metadata with Python + In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata using the [Azure Storage client library for Python](/python/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
# Manage blob properties and metadata with TypeScript + In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
# Manage blob properties and metadata with .NET + In addition to the data they contain, blobs support system properties and user-defined metadata. This article shows how to manage system properties and user-defined metadata with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
# Use blob index tags to manage and find data with Java + This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). ## Prerequisites
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
# Use blob index tags to manage and find data with JavaScript + This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
# Use blob index tags to manage and find data with Python + This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for Python](/python/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
# Use blob index tags to manage and find data with TypeScript + This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
# Use blob index tags to manage and find data with .NET + This article shows how to use blob index tags to manage and find data using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
The Azure SDK for .NET contains libraries that build on top of the Azure REST AP
### See also - [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
The Azure SDK for Java contains libraries that build on top of the Azure REST AP
### See also - [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient).
## Next steps - [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
The Azure SDK for Python contains libraries that build on top of the Azure REST
### See also - [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient).
## Next steps - [Access tiers best practices](access-tiers-best-practices.md)-- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Blob rehydration from the archive tier](archive-rehydrate-overview.md)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
The following diagram shows how your data is replicated across availability zone
ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
-The Archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
+The archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with availability zones](../../availability-zones/az-overview.md#azure-regions-with-availability-zones).
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
-| Availability for read requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool access tier) | At least 99.9% (99% for Cool or Archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GRS | At least 99.9% (99% for Cool access tier) for GZRS<br/><br/>At least 99.99% (99.9% for Cool access tier) for RA-GZRS |
-| Availability for write requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool access tier) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool access tier) |
+| Availability for read requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool or archive access tiers) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool access tier) for RA-GZRS |
+| Availability for write requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region | For more information, see the [SLA for Storage Accounts](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
The following table shows which redundancy options are supported for each type o
All data for all storage accounts is copied from the primary to the secondary according to the redundancy option for the storage account. Objects including block blobs, append blobs, page blobs, queues, tables, and files are copied.
-Data in all tiers, including the Archive tier, is always copied from the primary to the secondary during geo-replication. The Archive tier for Blob Storage is currently supported for LRS, GRS, and RA-GRS accounts, but not for ZRS, GZRS, or RA-GZRS accounts. For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md).
+Data in all tiers, including the archive tier, is always copied from the primary to the secondary during geo-replication. The archive tier for Blob Storage is currently supported for LRS, GRS, and RA-GRS accounts, but not for ZRS, GZRS, or RA-GZRS accounts. For more information about blob tiers, see [Access tiers for blob data](../blobs/access-tiers-overview.md).
Unmanaged disks don't support ZRS or GZRS.
storage Storage Ref Azcopy Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-configuration-settings.md
The following table describes each environment variable and provides links to co
| AZCOPY_CONCURRENCY_VALUE | Specifies the number of concurrent requests that can occur. You can use this variable to increase throughput. If your computer has fewer than 5 CPUs, then the value of this variable is set to `32`. Otherwise, the default value is equal to 16 multiplied by the number of CPUs. The maximum default value of this variable is `3000`, but you can manually set this value higher or lower. See [Increase concurrency](storage-use-azcopy-optimize.md#increase-concurrency) | | AZCOPY_CONCURRENT_FILES | Overrides the (approximate) number of files that are in progress at any one time, by controlling how many files we concurrently initiate transfers for. | | AZCOPY_CONCURRENT_SCAN | Controls the (max) degree of parallelism used during scanning. Only affects parallelized enumerators, which include Azure Files/Blobs, and local file systems. |
-| AZCOPY_CONTENT_TYPE_MAP | Overrides one or more of the default MIME type mappings defined by your operating system. Set this variable to the path of a JSON file that defines any mapping. Here's the contents of an example JSON file: <br><br> {<br>&nbsp;&nbsp;"MIMETypeMapping": { <br>&nbsp;&nbsp;&nbsp;&nbsp;".323": "text/h323",<br>&nbsp;&nbsp;&nbsp;&nbsp;".aaf": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp; ".aca": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp;&nbsp;".accdb": "application/msaccess",<br>&nbsp;&nbsp;&nbsp;&nbsp; }<br>}
+| AZCOPY_CONTENT_TYPE_MAP | Overrides one or more of the default MIME type mappings defined by your operating system. Set this variable to the path of a JSON file that defines any mapping. Here's the contents of an example JSON file: <br><br> {<br>&nbsp;&nbsp;"MIMETypeMapping": { <br>&nbsp;&nbsp;&nbsp;&nbsp;".323": "text/h323",<br>&nbsp;&nbsp;&nbsp;&nbsp;".aaf": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp; ".aca": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp;&nbsp;".accdb": "application/msaccess"<br>&nbsp;&nbsp;&nbsp;&nbsp; }<br>}
| | AZCOPY_DEFAULT_SERVICE_API_VERSION | Overrides the service API version so that AzCopy can accommodate custom environments such as Azure Stack. | | AZCOPY_DISABLE_HIERARCHICAL_SCAN | Applies only when Azure Blobs is the source. Concurrent scanning is faster but employs the hierarchical listing API, which can result in more I/Os and higher cost. Specify 'true' to sacrifice performance but save on cost. |
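As a quick sketch, these variables are set in the shell before invoking AzCopy; the concurrency value, file path, and destination URL below are illustrative assumptions:

```bash
# Sketch: raise request concurrency and supply a custom MIME-type map for one
# session. The concurrency value, JSON path, and destination URL are placeholders.
export AZCOPY_CONCURRENCY_VALUE=256
export AZCOPY_CONTENT_TYPE_MAP="/path/to/mime-mapping.json"
azcopy copy "/data/files/*" "https://<storage-account>.blob.core.windows.net/<container>" --recursive
```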
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 02/03/2023 Last updated : 10/02/2023
The following table shows the interop state of NTFS file system features:
| Mount points | Partially supported | Mount points might be the root of a server endpoint, but they are skipped if they are contained in a server endpoint's namespace. | | Junctions | Skipped | For example, Distributed File System DfrsrPrivate and DFSRoots folders. | | Reparse points | Skipped | |
-| NTFS compression | Fully supported | |
+| NTFS compression | Partially supported | Azure File Sync does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. |
| Sparse files | Fully supported | Sparse files sync (are not blocked), but they sync to the cloud as a full file. If the file contents change in the cloud (or on another server), the file is no longer sparse when the change is downloaded. | | Alternate Data Streams (ADS) | Preserved, but not synced | For example, classification tags created by the File Classification Infrastructure are not synced. Existing classification tags on files on each of the server endpoints are left untouched. |
The following table shows the interop state of NTFS file system features:
| ~$\*.\* | Office temporary file | | \*.tmp | Temporary file | | \*.laccdb | Access DB locking file|
-| 635D02A9D91C401B97884B82B3BCDAEA.* | Internal Sync file|
+| 635D02A9D91C401B97884B82B3BCDAEA.* | Internal sync file|
| \\System Volume Information | Folder specific to volume | | $RECYCLE.BIN| Folder |
-| \\SyncShareState | Folder for Sync |
+| \\SyncShareState | Folder for sync |
+| .SystemShareInformation | Folder for sync in Azure file share |
+
+> [!Note]
+> While Azure File Sync supports syncing database files, databases aren't a good workload for sync solutions (including Azure File Sync). The log files and databases need to be synced together, and they can get out of sync for various reasons, which could lead to database corruption.
### Consider how much free space you need on your local disk When planning on using Azure File Sync, consider how much free space you need on the local disk you plan to have a server endpoint on.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 8/9/2023 Last updated : 10/2/2023
For more information on how to install and configure the Azure File Sync agent w
- The agent installation package must be installed with elevated (admin) permissions. - The agent is not supported on Nano Server deployment option. - The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.
+- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, you must uninstall the existing agent, restart the server, and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information. - The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
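Before an in-place OS upgrade, one rough way to confirm which agent version is installed is to read the standard uninstall registry keys; this is a sketch, and the display-name match is an assumption that may vary by agent release:

```powershell
# Sketch: list the installed Storage Sync Agent entry from the uninstall registry keys.
# The display-name filter is an assumption; adjust it if your entry is named differently.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Storage Sync Agent*' } |
    Select-Object DisplayName, DisplayVersion
```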
storage File Sync Server Endpoint Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-delete.md
description: Guidance on how to deprovision your Azure File Sync server endpoint
Previously updated : 6/01/2021 Last updated : 10/02/2023
Before you recall any files, make sure that you have enough free space locally t
Use the **Invoke-StorageSyncFileRecall** PowerShell cmdlet and specify the **SyncGroupName** parameter to recall all files. ```powershell
-Invoke-StorageSyncFileRecall -SyncGroupName "samplesyncgroupname"
+Invoke-StorageSyncFileRecall -SyncGroupName "samplesyncgroupname" -ThreadCount 4
``` Once this cmdlet has finished running, you can move on to the next section.
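If you only need to recall part of a namespace, the cmdlet also accepts a path instead of a sync group; a sketch, where the folder path and thread count are illustrative:

```powershell
# Recall only the tiered files under one folder of a server endpoint.
# The path and thread count below are illustrative values.
Invoke-StorageSyncFileRecall -Path "D:\ServerEndpoint\Projects" -ThreadCount 4
```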
synapse-analytics Tutorial Horovod Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md
Title: 'Tutorial: Distributed training with Horovod and Pytorch'
+ Title: 'Tutorial: Distributed training with Horovod and PyTorch'
description: Tutorial on how to run distributed training with the Horovod Estimator and PyTorch
To ensure the Spark instance is shut down, end any connected sessions (notebooks)
## Next steps * [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
-* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop client for macOS
description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for macOS. Previously updated : 05/06/2023 Last updated : 10/02/2023
Before you can access your resources, you'll need to meet the prerequisites:
- Internet access. -- A device running macOS 10.14 or later.
+- A device running macOS 11 or later.
- Download and install the Remote Desktop client from the [Mac App Store](https://apps.apple.com/app/microsoft-remote-desktop/id1295203466?mt=12).
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
description: Learn about recent changes to the Remote Desktop client for macOS
Previously updated : 08/21/2023 Last updated : 10/02/2023

# What's new in the Remote Desktop client for macOS
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 08/30/2023 Last updated : 02/09/2023

# What's new in documentation for Azure Virtual Desktop

We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there have been important updates to existing articles.
+## September 2023
+
+In September 2023, we published the following changes:
+
+- A new article to [Use Microsoft OneDrive with a RemoteApp](onedrive-remoteapp.md).
+- A new article to [Uninstall and reinstall Remote Desktop Connection](/windows-server/remote/remote-desktop-services/clients/uninstall-remote-desktop-connection) (MSTSC) on Windows 11 23H2.
+- A new article for [Azure Virtual Desktop (classic) retirement](virtual-desktop-fall-2019/classic-retirement.md).
+- Updated articles for the general availability of custom image templates:
+ - [Custom image templates](custom-image-templates.md).
+ - [Use custom image templates to create custom images](create-custom-image-templates.md).
+ - [Troubleshoot custom image templates](troubleshoot-custom-image-templates.md).
+- Updated [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md?tabs=monitor) for the general availability of using the Azure Monitor Agent with Azure Virtual Desktop Insights.
+ ## August 2023 In August 2023, we published the following changes:
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
You can run the [az vm image list --all](/cli/azure/vm/image) to see all of the
az vm image list --output table ```
-The output includes the image URN. You can also use the *UrnAlias*, which is a shortened version created for popular images like *Ubuntu2204*.
+The output includes the image URN. If you omit the `--all` option, you can see the *UrnAlias* for each image, if available. *UrnAlias* is a shortened version created for popular images like *Ubuntu2204*.
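If you do pass `--all`, you can narrow the listing server-side with publisher, offer, and SKU filters; a sketch, where the filter values are examples:

```azurecli
az vm image list --all --publisher Canonical --offer 0001-com-ubuntu-server-jammy --sku 22_04-lts-gen2 --output table
```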
The Linux image alias names and their details output by `az vm image list --output table` are: ```output
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/
1. For **Security type**, make sure *Standard* is selected. 1. For your **Image**, select **See all images**. The **Select an image** page opens. 1. In the left menu, under **Other Items**, select **Direct Shared Images (PREVIEW)**. The **Other Items | Direct Shared Images (PREVIEW)** page opens.
+1. The scope in this section is set to 'Subscription' by default. If you don't see the images, change the scope to 'Tenant' and click outside the box to see the list of images shared with the entire tenant.
1. Select an image from the list. Make sure that the **OS state** is *Generalized*. If you want to use a specialized image, see [Create a VM using a specialized image version](vm-specialized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in will change to match the image. 1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page. 1. On the **Create a virtual machine** page, you can see the details about the VM you're about to create. When you're ready, select **Create**. -
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
description: See answers to frequently asked questions about Azure Virtual WAN n
Previously updated : 06/30/2023 Last updated : 09/29/2023 # Customer intent: As someone with a networking background, I want to read more details about Virtual WAN in a FAQ format.
Virtual WAN partners provide automation for connectivity, which is the ability t
### Can you disable fully meshed hubs in a Virtual WAN?
-Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs aren't meshed. In a Standard Virtual WAN, hubs are meshed and automatically connected when the virtual WAN is first set up. The user doesn't need to do anything specific. The user also doesn't have to disable or enable the functionality to obtain meshed hubs. Virtual WAN provides you many routing options to steer traffic between any spoke (VNet, VPN, or ExpressRoute). It provides the ease of fully meshed hubs, and also the flexibility of routing traffic per your needs.
+Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs aren't meshed. In a Standard Virtual WAN, hubs are meshed and automatically connected when the virtual WAN is first set up. The user doesn't need to do anything specific. The user also doesn't have to disable or enable the functionality to obtain meshed hubs. Virtual WAN provides you with many routing options to steer traffic between any spoke (VNet, VPN, or ExpressRoute). It provides the ease of fully meshed hubs, and also the flexibility of routing traffic per your needs.
### How are Availability Zones and resiliency handled in Virtual WAN?
There are two options to add DNS servers for the P2S clients. The first method i
</azvpnprofile> ```
-### For User VPN (point-to-site)- how many clients are supported?
+### <a name="p2s-concurrent"></a>For User VPN (point-to-site)- how many clients are supported?
The table below describes the number of concurrent connections and aggregate throughput of the Point-to-site VPN gateway supported at different scale units.
Virtual WAN supports up to 20-Gbps aggregate throughput both for VPN and Express
A virtual network gateway VPN is limited to 30 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branches per virtual hub, with an aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can also have multiple virtual hubs per region, which means you can connect more than 1,000 branches to a single Azure Region by deploying multiple Virtual WAN hubs in that Azure Region, each with its own site-to-site VPN gateway.
-### What is the recommended algorithm and Packets per second per site-to-site instance in Virtual WAN hub? How many tunnels is support per instance? What is the max throughput supported in a single tunnel?
+### <a name="packets"></a>What is the recommended algorithm and packets per second per site-to-site instance in a Virtual WAN hub? How many tunnels are supported per instance? What is the max throughput supported in a single tunnel?
Virtual WAN supports 2 active site-to-site VPN gateway instances in a virtual hub. This means there's an active-active set of 2 VPN gateway instances in a virtual hub. During maintenance operations, the instances are upgraded one at a time, during which a user may experience a brief decrease in the aggregate throughput of the VPN gateway.
No. You can use any VPN-capable device that adheres to the Azure requirements fo
Software-defined connectivity solutions typically manage their branch devices using a controller, or a device provisioning center. The controller can use Azure APIs to automate connectivity to the Azure Virtual WAN. The automation includes uploading branch information, downloading the Azure configuration, setting up IPsec tunnels into Azure Virtual hubs, and automatically setting up connectivity from the branch device to Azure Virtual WAN. When you have hundreds of branches, connecting using Virtual WAN CPE Partners is easy because the onboarding experience takes away the need to set up, configure, and manage large-scale IPsec connectivity. For more information, see [Virtual WAN partner automation](virtual-wan-configure-automation-providers.md).
-### What if a device I'm using isn't in the Virtual WAN partner list? Can I still use it to connect to Azure Virtual WAN VPN?
+### <a name="device"></a>What if a device I'm using isn't in the Virtual WAN partner list? Can I still use it to connect to Azure Virtual WAN VPN?
Yes, as long as the device supports IPsec IKEv1 or IKEv2. Virtual WAN partners automate connectivity from the device to Azure VPN end points. This implies automating steps such as 'branch information upload', 'IPsec configuration', and 'connectivity'. Because your device isn't from a Virtual WAN partner ecosystem, you'll need to do the heavy lifting of manually taking the Azure configuration and updating your device to set up IPsec connectivity.
A connection from a branch or VPN device into Azure Virtual WAN is a VPN connect
An Azure Virtual WAN connection is composed of 2 tunnels. A Virtual WAN VPN gateway is deployed in a virtual hub in active-active mode, which implies that there are separate tunnels from on-premises devices terminating on separate instances. This is the recommendation for all users. However, if the user chooses to have only 1 tunnel to one of the Virtual WAN VPN gateway instances and that instance is taken offline for any reason (maintenance, patches, and so on), the tunnel will be moved to the secondary active instance and the user may experience a reconnect. BGP sessions won't move across instances.
-### What happens during a Gateway Reset in a Virtual WAN VPN gateway?
+### What happens during a gateway reset in a Virtual WAN VPN gateway?
The Gateway Reset button should be used if your on-premises devices are all working as expected, but the site-to-site VPN connection in Azure is in a Disconnected state. Virtual WAN VPN gateways are always deployed in an Active-Active state for high availability. This means there's always more than one instance deployed in a VPN gateway at any point in time. When the Gateway Reset button is used, it reboots the instances in the VPN gateway in a sequential manner so your connections aren't disrupted. There will be a brief gap as connections move from one instance to the other, but this gap should be less than a minute. Additionally, note that resetting the gateways won't change your Public IPs.
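Besides the portal button, the reset can be scripted with Azure PowerShell; a sketch, assuming the `Get-AzVpnGateway`/`Reset-AzVpnGateway` cmdlets from the Az.Network module and placeholder resource names:

```powershell
# Sketch: reset a Virtual WAN VPN gateway (resource names are placeholders).
$gateway = Get-AzVpnGateway -ResourceGroupName "myResourceGroup" -Name "myVpnGateway"
Reset-AzVpnGateway -InputObject $gateway
```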
No. The spoke VNet can't have a Route Server if it's connected to the virtual WA
### Is there support for BGP in VPN connectivity?
-Yes, BGP is supported. When you create a VPN site, you can provide the BGP parameters in it. This will imply that any connections created in Azure for that site will be enabled for BGP.
+Yes, BGP is supported. When you create a VPN site, you can provide the BGP parameters in it. This implies that any connections created in Azure for that site will be enabled for BGP.
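For example, a hedged Azure CLI sketch of supplying the BGP parameters at site creation; the ASN, peering address, and resource names are placeholders:

```azurecli
az network vpn-site create \
    --resource-group myResourceGroup \
    --name myVpnSite \
    --virtual-wan myVirtualWan \
    --ip-address 203.0.113.10 \
    --asn 65510 \
    --bgp-peering-address 10.1.0.1
```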
### Is there any licensing or pricing information for Virtual WAN?
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
Both the Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that network virtual appliances (NVAs) can use to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ: Azure Route Server is typically deployed in a self-managed customer hub VNet, whereas Azure Virtual WAN provides a zero-touch, fully meshed hub service to which customers connect their various spoke endpoints (Azure VNets, on-premises branches with site-to-site VPN or SD-WAN, remote users with point-to-site/remote user VPN, and private connections with ExpressRoute). NVAs deployed in a spoke VNet get BGP peering along with other Virtual WAN capabilities, such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no-hassle inter-region security, and Secure Hub/Azure Firewall. For more details about Virtual WAN BGP peering, see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
-### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
+### <a name="third-party-security"></a>If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal.
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
[!INCLUDE [ExpressRoute Performance](../../includes/virtual-wan-expressroute-performance.md)]
-### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
+### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure portal. If the button isn't visible, please open a support case.
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
There are several things to note with the virtual hub router upgrade:
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
This section shows you how to add a gateway to an already existing virtual hub.
:::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/hub.png" alt-text="Screenshot shows the Edit virtual hub." lightbox="./media/virtual-wan-point-to-site-azure-ad/hub.png":::
- * **Gateway scale units**: Select the Gateway scale units. Scale units represent the aggregate capacity of the User VPN gateway. If you select 40 or more gateway scale units, plan your client address pool accordingly. For information about how this setting impacts the client address pool, see [About client address pools](about-client-address-pools.md). For information about gateway scale units, see the [FAQ](virtual-wan-faq.md#for-user-vpn-point-to-site--how-many-clients-are-supported).
+ * **Gateway scale units**: Select the Gateway scale units. Scale units represent the aggregate capacity of the User VPN gateway. If you select 40 or more gateway scale units, plan your client address pool accordingly. For information about how this setting impacts the client address pool, see [About client address pools](about-client-address-pools.md). For information about gateway scale units, see the [FAQ](virtual-wan-faq.md#p2s-concurrent).
* **User VPN configuration**: Select the configuration that you created earlier. * **User Groups to Address Pools Mapping**: For information about this setting, see [Configure user groups and IP address pools for P2S User VPNs (preview)](user-groups-create.md).