Updates from: 05/04/2023 01:10:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 03/06/2023 Last updated : 05/03/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## April 2023
+
+### Updated articles
+
+- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Update partner-bindid.md
+- [Tutorial: Enable secure hybrid access for applications with Azure Active Directory B2C and F5 BIG-IP](partner-f5.md) - Update partner-f5.md
+ ## March 2023
+
+ ### Updated articles
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Previously updated : 01/23/2023 Last updated : 05/03/2023 zone_pivot_groups: app-provisioning-cross-tenant-synchronization
Use on-demand provisioning to provision a user or group in seconds. Among other
5. Select **Provision on demand**.
-6. Search for a user by first name, last name, display name, user principal name, or email address. Alternatively, you can search for a group and pick up to 5 users.
+6. Search for a user by first name, last name, display name, user principal name, or email address. Alternatively, you can search for a group and pick up to five users.
> [!NOTE]
> For Cloud HR provisioning app (Workday/SuccessFactors to AD/Azure AD), the input value is different.
> For the Workday scenario, provide the "WorkerID" or "WID" of the user in Workday.
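If you'd rather trigger this step from a script than the portal, the beta Graph API exposes a `provisionOnDemand` action. The following is a hedged sketch using the Microsoft Graph PowerShell SDK; `$spId`, `$jobId`, `$ruleId`, and `$userId` are placeholders you'd look up in your own tenant:

```powershell
# Sketch only: trigger on-demand provisioning through the beta Graph API.
# $spId, $jobId, $ruleId, and $userId are placeholders from your tenant.
Connect-MgGraph -Scopes "Synchronization.ReadWrite.All"

$body = @{
    parameters = @(
        @{
            ruleId   = $ruleId
            subjects = @(@{ objectId = $userId; objectTypeName = "User" })
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST -ContentType "application/json" -Body $body `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/jobs/$jobId/provisionOnDemand"
```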
Use on-demand provisioning to provision a user or group in seconds. Among other
## Understand the provisioning steps
-The on-demand provisioning process attempts to show the steps that the provisioning service takes when provisioning a user. There are typically five steps to provision a user. One or more of those steps, explained in the following sections, will be shown during the on-demand provisioning experience.
+The on-demand provisioning process attempts to show the steps that the provisioning service takes when provisioning a user. There are typically five steps to provision a user. One or more of those steps, explained in the following sections, are shown during the on-demand provisioning experience.
### Step 1: Test connection
The **View details** section shows the scoping conditions that were evaluated. Y
#### Troubleshooting tips
-* Make sure that you've defined a valid scoping role. For example, avoid using the [Greater_Than operator](./define-conditional-rules-for-provisioning-user-accounts.md#create-a-scoping-filter) with a non-integer value.
+* Make sure that you've defined a valid scoping role. For example, avoid using the [Greater_Than operator](./define-conditional-rules-for-provisioning-user-accounts.md#create-a-scoping-filter) with a noninteger value.
* If the user doesn't have the necessary role, review the [tips for provisioning users assigned to the default access role](./application-provisioning-config-problem-no-users-provisioned.md#provisioning-users-assigned-to-the-default-access-role).

### Step 4: Match user between source and target
In this step, the service attempts to match the user that was retrieved in the i
#### View details
-The **View details** page shows the properties of the users that were matched in the target system. The properties that you see in the context pane vary as follows:
+The **View details** page shows the properties of the users that were matched in the target system. The context pane changes as follows:
-* If no users are matched in the target system, you won't see any properties.
-* If there's one user matched in the target system, you'll see the properties of that matched user from the target system.
-* If multiple users are matched, you'll see the properties of both matched users.
+* If no users are matched in the target system, no properties are shown.
+* If one user matches in the target system, the properties of that user are shown.
+* If multiple users match, the properties of both users are shown.
* If multiple matching attributes are part of your attribute mappings, each matching attribute is evaluated sequentially and the matched users for that attribute are shown.

#### Troubleshooting tips
The **View details** section displays the attributes that were modified in the t
#### Troubleshooting tips

* Failures for exporting changes can vary greatly. Check the [documentation for provisioning logs](../reports-monitoring/concept-provisioning-logs.md#error-codes) for common failures.
-* On-demand provisioning says the group or user can't be provisioned because they're not assigned to the application. Note that there's a replicate delay of up to a few minutes between when an object is assigned to an application and that assignment being honored by on-demand provisioning. You may need to wait a few minutes and try again.
+* On-demand provisioning says the group or user can't be provisioned because they're not assigned to the application. There's a replication delay of up to a few minutes between when an object is assigned to an application and when that assignment is honored in on-demand provisioning. You may need to wait a few minutes and try again.
## Frequently asked questions
-* **Do you need to turn provisioning off to use on-demand provisioning?** For applications that use a long-lived bearer token or a user name and password for authorization, no additional steps are required. Applications that use OAuth for authorization currently require the provisioning job to be stopped before using on-demand provisioning. Applications such as G Suite, Box, Workplace by Facebook, and Slack fall into this category. Work is in progress to support on-demand provisioning for all applications without having to stop provisioning jobs.
+* **Do you need to turn provisioning off to use on-demand provisioning?** For applications that use a long-lived bearer token or a user name and password for authorization, no further steps are required. Applications that use OAuth for authorization currently require the provisioning job to be stopped before using on-demand provisioning. Applications such as G Suite, Box, Workplace by Facebook, and Slack fall into this category. Work is in progress to support on-demand provisioning for all applications without having to stop provisioning jobs.
* **How long does on-demand provisioning take?** On-demand provisioning typically takes less than 30 seconds.
There are currently a few known limitations to on-demand provisioning. Post your
> [!NOTE]
> The following limitations are specific to the on-demand provisioning capability. For information about whether an application supports provisioning groups, deletions, or other capabilities, check the tutorial for that application.
-* On-demand provisioning of groups supports updating up to 5 members at a time
+* On-demand provisioning of groups supports updating up to five members at a time
::: zone-end
-* Restoring a previously soft-deleted user in the target tenant with on-demand provisioning isn't supported. If you try to soft delete a user with on-demand provisioning and then restore the user, it can result in duplicate users.
+* Restoring a previously soft-deleted user in the target tenant with on-demand provisioning isn't supported. If you try to soft-delete a user with on-demand provisioning and then restore the user, it can result in duplicate users.
* On-demand provisioning of roles isn't supported.
-* On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users won't appear when you search for a user.
-* On-demand provisioning does not support nested groups that are not directly assigned to the application.
+* On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users don't appear when you search for a user.
+* On-demand provisioning doesn't support nested groups that aren't directly assigned to the application.
## Next steps
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Title: Skip deletion of out of scope users in Azure Active Directory Application Provisioning
-description: Learn how to override the default behavior of de-provisioning out of scope users in Azure Active Directory.
+description: Learn how to override the default behavior of deprovisioning out of scope users in Azure Active Directory.
Previously updated : 01/23/2023 Last updated : 05/03/2023
By default, the Azure AD provisioning engine soft deletes or disables users that go out of scope. However, for certain scenarios like Workday to AD User Inbound Provisioning, this behavior may not be what you expect, and you may want to override this default behavior. This article describes how to use the Microsoft Graph API and the Microsoft Graph API explorer to set the flag ***SkipOutOfScopeDeletions*** that controls the processing of accounts that go out of scope.
-* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope will be disabled in the target.
-* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope won't be disabled in the target. This flag is set at the *Provisioning App* level and can be configured using the Graph API.
+* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope are disabled in the target.
+* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope aren't disabled in the target. This flag is set at the *Provisioning App* level and can be configured using the Graph API.
-Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). Note that in order to successfully complete this procedure you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
+Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox, and with [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). To successfully complete this procedure, you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID)
-1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For e.g. if you want to export your *Workday to AD User Provisioning application* mapping navigate to the Properties section of that app.
-1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your App and it will be used in Graph Explorer operations.
+1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping, navigate to the Properties section of that app.
+1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your app and it's used in Graph Explorer operations.
![Screenshot of Workday App Service Principal ID.](./media/skip-out-of-scope-deletions/wd_export_01.png)
Because this configuration is widely used with the *Workday to Active Directory
![Screenshot of Microsoft Graph Explorer Sign-in.](./media/skip-out-of-scope-deletions/wd_export_02.png)
-1. Upon successful sign-in, you'll see the user account details in the left-hand pane.
+1. Upon successful sign-in, the user account details appear in the left-hand pane.
## Step 3: Get existing app credentials and connectivity details
In the Microsoft Graph Explorer, run the following GET query replacing [serviceP
![Screenshot of GET job query.](./media/skip-out-of-scope-deletions/skip-03.png)
-Copy the Response into a text file. It will look like the JSON text shown below, with values highlighted in yellow specific to your deployment. Add the lines highlighted in green to the end and update the Workday connection password highlighted in blue.
+Copy the Response into a text file. It looks like the JSON text shown, with values highlighted in yellow specific to your deployment. Add the lines highlighted in green to the end and update the Workday connection password highlighted in blue.
![Screenshot of GET job response.](./media/skip-out-of-scope-deletions/skip-04.png)
Here's the JSON block to add to the mapping.
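A minimal sketch of that pair, using the flag name and value described earlier (the surrounding entries in your tenant's JSON will differ):

```json
{
  "key": "SkipOutOfScopeDeletions",
  "value": "True"
}
```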
## Step 4: Update the secrets endpoint with the SkipOutOfScopeDeletions flag
-In the Graph Explorer, run the command below to update the secrets endpoint with the ***SkipOutOfScopeDeletions*** flag.
+In the Graph Explorer, run the command to update the secrets endpoint with the ***SkipOutOfScopeDeletions*** flag.
-In the URL below replace [servicePrincipalId] with the **ServicePrincipalId** extracted from the [Step 1](#step-1-retrieve-your-provisioning-app-service-principal-id-object-id).
+In the URL, replace [servicePrincipalId] with the **ServicePrincipalId** extracted in [Step 1](#step-1-retrieve-your-provisioning-app-service-principal-id-object-id).
```http
PUT https://graph.microsoft.com/beta/servicePrincipals/[servicePrincipalId]/synchronization/secrets
```
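If you prefer running this step from PowerShell instead of Graph Explorer, a rough equivalent with the Microsoft Graph PowerShell SDK is sketched below; `$spId` and `$secretsJson` are placeholders for the ServicePrincipalId from Step 1 and the full payload you assembled in Step 3:

```powershell
# Sketch only: PUT the updated secrets payload assembled in Step 3.
# $spId is the ServicePrincipalId from Step 1; $secretsJson is the full JSON body.
Invoke-MgGraphRequest -Method PUT -ContentType "application/json" -Body $secretsJson `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/secrets"
```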
You should get the output as "Success ΓÇô Status Code 204". If you receive an er
## Step 5: Verify that out of scope users don't get disabled
-You can test this flag results in expected behavior by updating your scoping rules to skip a specific user. In the example below, we're excluding the employee with ID 21173 (who was earlier in scope) by adding a new scoping rule:
+You can test that this flag results in the expected behavior by updating your scoping rules to skip a specific user. In the example, we exclude the employee with ID 21173 (who was earlier in scope) by adding a new scoping rule:
![Screenshot that shows the "Add Scoping Filter" section with an example user highlighted.](./media/skip-out-of-scope-deletions/skip-07.png)
-In the next provisioning cycle, the Azure AD provisioning service will identify that the user 21173 has gone out of scope and if the SkipOutOfScopeDeletions property is enabled, then the synchronization rule for that user will display a message as shown below:
+In the next provisioning cycle, the Azure AD provisioning service identifies that the user 21173 has gone out of scope. If the `SkipOutOfScopeDeletions` property is enabled, then the synchronization rule for that user displays a message as shown:
![Screenshot of scoping example.](./media/skip-out-of-scope-deletions/skip-08.png)
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
# Customer intent: As an identity administrator, I want to understand how to use phone authentication methods in Azure AD to improve and secure user sign-in events.
+
# Authentication methods in Azure Active Directory - phone options

For direct authentication using text message, you can [Configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md). SMS-based sign-in is great for frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
If you have problems with phone authentication for Azure AD, review the followin
* Have the user attempt to sign in using a Wi-Fi connection by installing the Authenticator app.
* Or, use SMS authentication instead of phone (voice) authentication.
+* Phone number is blocked and can't be used for voice MFA
+
+ - There are a few country codes blocked for voice MFA unless your Azure AD administrator has opted in for those country codes. Have your Azure AD administrator opt in to receive MFA for those country codes.
+
+ - Or, use Microsoft Authenticator instead of voice authentication.
+ ## Next steps

To get started, see the [tutorial for self-service password reset (SSPR)][tutorial-sspr] and [Azure AD Multi-Factor Authentication][tutorial-azure-mfa].
Learn more about configuring authentication methods using the [Microsoft Graph R
<!-- INTERNAL LINKS -->
[tutorial-sspr]: tutorial-enable-sspr.md
[tutorial-azure-mfa]: tutorial-enable-azure-mfa.md
[concept-sspr]: concept-sspr-howitworks.md
[concept-mfa]: concept-mfa-howitworks.md
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
An authentication strength Conditional Access policy works together with [MFA tr
- **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy.
-- **Authentication loop** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but the user is not registered for this method, they will be given instructions on how to set up the Microsoft Authenticator, that does not include how to enable Passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
-
## Limitations
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 03/06/2023 Last updated : 05/03/2023
# Combined security information registration for Azure Active Directory overview
-Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ)
+Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).
![My Account showing registered Security info for a user](media/concept-registration-mfa-sspr-combined/combined-security-info-defaults-registered.png)
Combined registration supports the authentication methods and actions in the fol
| FIDO2 security keys*| Yes | No | Yes |

> [!NOTE]
-> <b>Alternate phone</b> can only be registered in *manage mode* on the [Security info](https://mysignins.microsoft.com/security-info) page and requires Voice calls to be enabled in the Authentication methods policy. <br />
-> <b>Office phone</b> can only be registered in *Interrupt mode* if the users *Business phone* property has been set. Office phone can be added by users in *Managed mode from the [Security info](https://mysignins.microsoft.com/security-info)* without this requirement. <br />
-> <b>App passwords</b> are available only to users who have been enforced for per-user MFA. App passwords aren't available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br />
-> <b>FIDO2 security keys</b>, can only be added in *manage mode only* on the [Security info](https://mysignins.microsoft.com/security-info) page.
+> If you enable Microsoft Authenticator for passwordless authentication mode in the Authentication methods policy, users need to also enable passwordless sign-in in the Authenticator app.
+>
+> Alternate phone can only be registered in *Manage mode* on [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) and requires Voice calls to be enabled in the Authentication methods policy.
+>
+> Office phone can only be registered in *Interrupt mode* if the user's *Business phone* property has been set. Office phone can be added by users in *Managed mode* from [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) without this requirement.
+>
+> App passwords are available only to users who have been enforced for per-user MFA. App passwords aren't available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy.
+>
+> FIDO2 security keys can only be added in *Manage mode* on [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
Users can set one of the following options as the default multifactor authentication method.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 01/25/2023 Last updated : 04/29/2023
In Azure Active Directory (Azure AD), there's a password policy that defines settings like the password complexity, length, or age. There's also a policy that defines acceptable characters and length for usernames.
-When self-service password reset (SSPR) is used to change or reset a password in Azure AD, the password policy is checked. If the password doesn't meet the policy requirements, the user is prompted to try again. Azure administrators have some restrictions on using SSPR that are different to regular user accounts.
+When self-service password reset (SSPR) is used to change or reset a password in Azure AD, the password policy is checked. If the password doesn't meet the policy requirements, the user is prompted to try again. Azure administrators have some restrictions on using SSPR that are different from those for regular user accounts, and there are minor exceptions for trial and free versions of Azure AD.
-This article describes the password policy settings and complexity requirements associated with user accounts in your Azure AD tenant, and how you can use PowerShell to check or set password expiration settings.
+This article describes the password policy settings and complexity requirements associated with user accounts. It also covers how to use PowerShell to check or set password expiration settings.
## Username policies
-Every account that signs in to Azure AD must have a unique user principal name (UPN) attribute value associated with their account. In hybrid environments with an on-premises Active Directory Domain Services (AD DS) environment synchronized to Azure AD using Azure AD Connect, by default the Azure AD UPN is set to the on-prem UPN.
+Every account that signs in to Azure AD must have a unique user principal name (UPN) attribute value associated with their account. In hybrid environments with an on-premises Active Directory Domain Services (AD DS) environment synchronized to Azure AD using Azure AD Connect, by default the Azure AD UPN is set to the on-premises UPN.
The following table outlines the username policies that apply to both on-premises AD DS accounts that are synchronized to Azure AD, and for cloud-only user accounts created directly in Azure AD:
The following table outlines the username policies that apply to both on-premise
A password policy is applied to all user accounts that are created and managed directly in Azure AD. Some of these password policy settings can't be modified, though you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md) or account lockout parameters.
-By default, an account is locked out after 10 unsuccessful sign-in attempts with the wrong password. The user is locked out for one minute. Further incorrect sign-in attempts lock out the user for increasing durations of time. [Smart lockout](howto-password-smart-lockout.md) tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, this behavior will not cause the account to lock out. You can define the smart lockout threshold and duration.
+By default, an account is locked out after 10 unsuccessful sign-in attempts with the wrong password. The user is locked out for one minute. Further incorrect sign-in attempts lock out the user for increasing durations of time. [Smart lockout](howto-password-smart-lockout.md) tracks the last three bad password hashes to avoid incrementing the lockout counter for the same password. If someone enters the same bad password multiple times, they won't get locked out. You can define the smart lockout threshold and duration.
The Azure AD password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Azure AD Connect, unless you enable *EnforceCloudPasswordPolicyForPasswordSyncedUsers*.
The following Azure AD password policy options are defined. Unless noted, you ca
| | |
| Characters allowed | A – Z<br>a - z<br>0 – 9<br>@ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ><br>Blank space |
| Characters not allowed | Unicode characters |
-| Password restrictions |A minimum of 8 characters and a maximum of 256 characters.<br>Requires three out of four of the following:<br>- Lowercase characters<br>- Uppercase characters<br>- Numbers (0-9)<br>- Symbols (see the previous password restrictions) |
+| Password restrictions |A minimum of 8 characters and a maximum of 256 characters.<br>Requires three out of four of the following types of characters:<br>- Lowercase characters<br>- Uppercase characters<br>- Numbers (0-9)<br>- Symbols (see the previous password restrictions) |
| Password expiry duration (Maximum password age) | Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell. |
| Password expiry (Let passwords never expire) | Default value: **false** (indicates that passwords have an expiration date).<br>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet. |
| Password change history | The last password *can't* be used again when the user changes a password. |
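To make the "three out of four" rule in the restrictions row concrete, here's a small PowerShell check that mirrors it. This is an illustration only, not the service's actual validator:

```powershell
# Illustration only: approximate the "3 of 4 categories" rule described above.
function Test-PasswordComplexity {
    param([string]$Password)

    $categories = 0
    if ($Password -cmatch '[a-z]') { $categories++ }        # lowercase
    if ($Password -cmatch '[A-Z]') { $categories++ }        # uppercase
    if ($Password -match '[0-9]')  { $categories++ }        # numbers
    if ($Password -match '[^a-zA-Z0-9]') { $categories++ }  # symbols or blank space

    ($Password.Length -ge 8) -and ($Password.Length -le 256) -and ($categories -ge 3)
}

Test-PasswordComplexity "Contoso2023"   # True: upper, lower, and numbers
Test-PasswordComplexity "contoso"       # False: one category and too short
```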
The following Azure AD password policy options are defined. Unless noted, you ca
By default, administrator accounts are enabled for self-service password reset, and a strong default *two-gate* password reset policy is enforced. This policy may be different from the one you have defined for your users, and this policy can't be changed. You should always test password reset functionality as a user without any Azure administrator roles assigned.
-With a two-gate policy, administrators don't have the ability to use security questions.
+The two-gate policy requires two pieces of authentication data, such as an email address, authenticator app, or a phone number, and it prohibits security questions. Office and mobile voice calls are also prohibited for trial or free versions of Azure AD.
-The two-gate policy requires two pieces of authentication data, such as an email address, authenticator app, or a phone number. A two-gate policy applies in the following circumstances:
+A two-gate policy applies in the following circumstances:
* All the following Azure administrator roles are affected:
  * Application administrator
A one-gate policy requires one piece of authentication data, such as an email ad
## Password expiration policies
-A *global administrator* or *user administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
+A *Global Administrator* or *User Administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
You can also use PowerShell cmdlets to remove the never-expires configuration or to see which user passwords are set to never expire.
After the module is installed, use the following steps to complete each task as
### Check the expiration policy for a password
-1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *global administrator* or *user administrator* account.
+1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *Global Administrator* or *User Administrator* account.
1. Run one of the following commands for either an individual user or for all users:
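The commands themselves are truncated in this listing; as a sketch with the AzureAD module, the check typically looks like the following (the UPN is a placeholder, and the calculated property simply tests for the `DisablePasswordExpiration` policy flag):

```powershell
# Sketch: report whether passwords are set to never expire (AzureAD module).
# For a single user (the UPN is a placeholder):
Get-AzureADUser -ObjectId "driley@contoso.onmicrosoft.com" |
    Select-Object UserPrincipalName, @{ N = "PasswordNeverExpires"; E = { $_.PasswordPolicies -contains "DisablePasswordExpiration" } }

# For all users:
Get-AzureADUser -All $true |
    Select-Object UserPrincipalName, @{ N = "PasswordNeverExpires"; E = { $_.PasswordPolicies -contains "DisablePasswordExpiration" } }
```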
After the module is installed, use the following steps to complete each task as
### Set a password to expire
-1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *global administrator* or *user administrator* account.
+1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *Global Administrator* or *User Administrator* account.
1. Run one of the following commands for either an individual user or for all users:
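A sketch of the single-user variant with the AzureAD module (the UPN is a placeholder):

```powershell
# Sketch: clear the never-expires flag so the password follows the tenant expiration policy.
Set-AzureADUser -ObjectId "driley@contoso.onmicrosoft.com" -PasswordPolicies None
```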
After the module is installed, use the following steps to complete each task as
### Set a password to never expire
-1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *global administrator* or *user administrator* account.
+1. Open a PowerShell prompt and [connect to your Azure AD tenant](/powershell/module/azuread/connect-azuread#examples) using a *Global Administrator* or *User Administrator* account.
1. Run one of the following commands for either an individual user or for all users:

   * To set the password of one user to never expire, run the following cmdlet. Replace `<user ID>` with the user ID of the user you want to check, such as *driley\@contoso.onmicrosoft.com*.
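A sketch of that cmdlet with the AzureAD module, using the example UPN from the step above:

```powershell
# Sketch: mark one user's password as never expiring (AzureAD module).
Set-AzureADUser -ObjectId "driley@contoso.onmicrosoft.com" -PasswordPolicies DisablePasswordExpiration
```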
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 04/05/2023 Last updated : 05/01/2023
The MFA Server Migration Utility helps synchronize multifactor authentication da
After the authentication data is migrated to Azure AD, users can perform cloud-based MFA seamlessly without having to register again or confirm authentication methods. Admins can use the MFA Server Migration Utility to target single users or groups of users for testing and controlled rollout without having to make any tenant-wide changes.
+## Video: How to use the MFA Server Migration Utility
+
+Take a look at our video for an overview of the MFA Server Migration Utility and how it works.
+
+>[!VIDEO https://www.microsoft.com/videoplayer/embed/RW11N1N]
+ ## Limitations and requirements

- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
You must also meet the following system requirements:
- Devices must be running Windows 10 version 2004 or later.
-- Your Windows Server domain controllers must have patches installed for the following servers:
+- Your Windows Server domain controllers must run Windows Server 2016 or later and have patches installed for the following servers:
  - [Windows Server 2016](https://support.microsoft.com/help/4534307/windows-10-update-kb4534307)
  - [Windows Server 2019](https://support.microsoft.com/help/4534321/windows-10-update-kb4534321)
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
One of the user attributes that's automatically synchronized by Azure AD Connect
Email as an alternate login ID applies to [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) under a "bring your own sign-in identifiers" model. When email as an alternate login ID is enabled in the home tenant, Azure AD users can perform guest sign in with non-UPN email on the resource tenant endpoint. No action is required from the resource tenant to enable this functionality.
+> [!NOTE]
+> When an alternate login ID is used on a resource tenant endpoint that does not have the functionality enabled, the sign-in process will work seamlessly, but SSO will be interrupted.
+ ## Enable user sign-in with an email address

> [!NOTE]
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
To install the Azure AD Password Protection proxy service, complete the followin
Registration of the Azure AD Password Protection proxy service is necessary only once in the lifetime of the service. After that, the Azure AD Password Protection proxy service will automatically perform any other necessary maintenance.
-1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionDCAgentHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md).
+1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionProxyHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md).
1. Now register the on-premises Active Directory forest with the necessary credentials to communicate with Azure by using the `Register-AzureADPasswordProtectionForest` PowerShell cmdlet.
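As a hedged sketch of that registration plus the health check named above (the account UPN is a placeholder; an interactive sign-in prompt follows):

```powershell
# Sketch: register the on-premises forest; the UPN is a placeholder for an
# account that can authenticate to Azure.
Register-AzureADPasswordProtectionForest -AccountUpn 'yourglobaladmin@yourtenant.onmicrosoft.com'

# Confirm the changes took effect, per the surrounding steps.
Test-AzureADPasswordProtectionProxyHealth -TestAll
```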
To install the Azure AD Password Protection proxy service, complete the followin
For `Register-AzureADPasswordProtectionForest` to succeed, at least one DC running Windows Server 2012 or later must be available in the Azure AD Password Protection proxy server's domain. The Azure AD Password Protection DC agent software doesn't have to be installed on any domain controllers prior to this step.
-1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionDCAgentHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md).
+1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionProxyHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md).
### Configure the proxy service to communicate through an HTTP proxy
active-directory Single Sign On Macos Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-macos-ios.md
Previously updated : 11/23/2022 Last updated : 05/03/2023
This type of SSO works between multiple apps distributed by the same Apple Devel
- [SSO through Authentication broker](#sso-through-authentication-broker-on-ios)
-The SSO through authentication broker isn't available on macOS.
-
-Microsoft provides apps called brokers, that enable SSO between applications from different vendors as long as the mobile device is registered with Azure Active Directory (Azure AD). This type of SSO requires a broker application be installed on the user's device.
+Microsoft provides apps called brokers that enable SSO between applications from different vendors as long as the mobile device is registered with Azure Active Directory (Azure AD). This type of SSO requires a broker application be installed on the user's device.
- **SSO between MSAL and Safari**
This type of SSO is currently not available on macOS. MSAL on macOS only support
- **Silent SSO between ADAL and MSAL macOS/iOS apps**
-MSAL Objective-C supports migration and SSO with ADAL Objective-C-based apps. The apps must be distributed by the same Apple Developer.
+MSAL Objective-C supports migration and SSO with ADAL Objective-C-based apps. The apps must be distributed by the same Apple Developer.
See [SSO between ADAL and MSAL apps on macOS and iOS](sso-between-adal-msal-apps-macos-ios.md) for instructions for cross-app SSO between ADAL and MSAL-based apps.
active-directory Spa Quickstart Portal Angular Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-angular-ciam.md
Last updated 05/05/2023
> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/). >
-> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> 1. Unzip the sample app, `cd` into the folder that contains `package.json`, then run the following commands:
> ```console
> npm install && npm start
> ```
active-directory Spa Quickstart Portal React Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-react-ciam.md
Last updated 05/05/2023
> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/). >
-> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> 1. Unzip the sample app, `cd` into the folder that contains `package.json`, then run the following commands:
> ```console
> npm install && npm start
> ```
active-directory Spa Quickstart Portal Vanilla Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-vanilla-js-ciam.md
Last updated 05/05/2023
> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/). >
-> 1. Unzip the sample, `cd` into the app root folder, then run the following commands:
+> 1. Unzip the sample app, `cd` into the app root folder, then run the following command:
> ```console
-> cd App && npm install && npm start
+> npm install && npm start
> ```
> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts.
>
active-directory Web App Quickstart Portal Dotnet Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-dotnet-ciam.md
Last updated 05/05/2023
> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET web app can sign in users with Azure Active Directory for customers.
>
> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
-> 1. Make sure you've installed Make sure you've installed [.NET SDK v7](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
+> 1. Make sure you've installed [.NET SDK v7](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
>
-> 1. Unzip the sample, `cd` into the app root folder, then run the following command:
+> 1. Unzip the sample app, `cd` into the app root folder, then run the following command:
> ```console
> dotnet run
> ```
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
Last updated 05/05/2023
> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/). >
-> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
+> 1. Unzip the sample app, `cd` into the folder that contains `package.json`, then run the following command:
> ```console > npm install && npm start > ```
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/20/2023 Last updated : 05/03/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on April 20th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on May 3rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Phone Standard_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft Teams Phone Resource Account | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | Microsoft 365 Phone Standard Resource Account (f47330e9-c134-43b3-9993-e7f004506889) |
| Microsoft Teams Phone Resource Account for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone Standard Resource Account for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) |
-| Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
| Microsoft Teams Premium Introductory Pricing | Microsoft_Teams_Premium | 36a0f3b3-adb5-49ea-bf66-762134cf063a | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
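If you want to work with the downloadable CSV version of this table programmatically, here's a quick PowerShell sketch; the column name `String_Id` is assumed from the file's current layout and may change:

```powershell
# Sketch: download the licensing CSV linked above and list one SKU's service plans.
# Column names (for example, String_Id) are assumed from the file's current layout.
$url = "https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv"
$rows = (Invoke-WebRequest -Uri $url).Content | ConvertFrom-Csv
$rows | Where-Object { $_.String_Id -eq "Microsoft_Teams_Rooms_Basic" } | Format-List
```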
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
This property indicates the relationship of the user to the host tenancy. This p
### Identities
-This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting the link next to **Identities** in the user's profile or by querying the `onPremisesSyncEnabled` property via the Microsoft Graph API.
+This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting the link next to **Identities** in the user's profile or by querying the `identities` property via the Microsoft Graph API.
> [!NOTE]
> Identities and UserType are independent properties. A value of Identities does not imply a particular value for UserType.
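To see the `identities` collection mentioned above, a sketch of the Graph call follows; the object ID is a placeholder, and it assumes a connected Microsoft Graph PowerShell session:

```powershell
# Sketch: read a user's identities collection via Microsoft Graph.
# Replace {user-object-id}; assumes Connect-MgGraph -Scopes "User.Read.All" has run.
Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/users/{user-object-id}?$select=identities'
```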
active-directory Cloudflare Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-azure-ad-integration.md
Title: Secure hybrid access with Azure AD and Cloudflare
+ Title: Configure Cloudflare with Azure Active Directory for secure hybrid access
description: In this tutorial, learn how to integrate Cloudflare with Azure AD for secure hybrid access
Previously updated : 6/27/2022 Last updated : 05/02/2023
# Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access
-In this tutorial, learn how to integrate Azure Active Directory
-(Azure AD) with Cloudflare Zero Trust. Using this solution, you can build rules based on user identity and group membership. Users can authenticate with their Azure AD credentials and connect to Zero Trust protected applications.
+In this tutorial, learn to integrate Azure Active Directory (Azure AD) with Cloudflare Zero Trust. Build rules based on user identity and group membership. Users authenticate with Azure AD credentials and connect to Zero Trust protected applications.
## Prerequisites
-To get started, you need:
--- An Azure AD subscription-
- - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/).
--- An Azure AD tenant linked to your Azure AD subscription-
- - See, [Quickstart: Create a new tenant in Azure Active Directory](../fundamentals/active-directory-access-create-new-tenant.md).
--- A Cloudflare Zero Trust account-
- - If you don't have one, go to [Get started with Cloudflare's Zero Trust
- platform](https://dash.cloudflare.com/sign-up/teams)
+* An Azure AD subscription
+ * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+* An Azure AD tenant linked to the Azure AD subscription
+ * See, [Quickstart: Create a new tenant in Azure AD](../fundamentals/active-directory-access-create-new-tenant.md)
+* A Cloudflare Zero Trust account
+ * If you don't have one, go to [Get started with Cloudflare's Zero Trust platform](https://dash.cloudflare.com/sign-up/teams)
## Integrate organization identity providers with Cloudflare Access
-Cloudflare Zero Trust Access helps enforce default-deny, Zero Trust
-rules that limit access to corporate applications, private IP spaces,
-and hostnames. This feature connects users faster and safer than a virtual private network (VPN).
+Cloudflare Zero Trust Access helps enforce default-deny, Zero Trust rules that limit access to corporate applications, private IP spaces, and hostnames. This feature connects users faster and safer than a virtual private network (VPN). Organizations can use multiple identity providers (IdPs), reducing friction when working with partners or contractors.
-Organizations can use multiple Identity Providers (IdPs) simultaneously, reducing friction when working with partners
-or contractors.
+To add an IdP as a sign-in method, sign in to Cloudflare on the [Cloudflare sign in page](https://dash.teams.cloudflare.com/) and Azure AD.
-To add an IdP as a sign-in method, configure [Cloudflare Zero Trust
-dashboard](https://dash.teams.cloudflare.com/) and Azure
-AD.
+The following architecture diagram shows the integration.
-The following architecture diagram shows the implementation.
-
-![Screenshot shows the architecture diagram of Cloudflare and Azure AD integration](./media/cloudflare-azure-ad-integration/cloudflare-architecture-diagram.png)
+ ![Diagram of the Cloudflare and Azure AD integration architecture.](./media/cloudflare-azure-ad-integration/cloudflare-architecture-diagram.png)
## Integrate a Cloudflare Zero Trust account with Azure AD
-To integrate Cloudflare Zero Trust account with an instance of Azure AD:
-
-1. On the [Cloudflare Zero Trust
- dashboard](https://dash.teams.cloudflare.com/),
- navigate to **Settings > Authentication**.
+Integrate a Cloudflare Zero Trust account with an instance of Azure AD.
-2. For **Login methods**, select **Add new**.
+1. Sign in to the Cloudflare Zero Trust dashboard on the [Cloudflare sign in page](https://dash.teams.cloudflare.com/).
+2. Navigate to **Settings**.
+3. Select **Authentication**.
+4. For **Login methods**, select **Add new**.
- ![Screenshot shows adding new login methods](./media/cloudflare-azure-ad-integration/login-methods.png)
+ ![Screenshot of the Login methods option on Authentication.](./media/cloudflare-azure-ad-integration/login-methods.png)
-3. Under **Select an identity provider**, select **Azure AD.**
+5. Under **Select an identity provider**, select **Azure AD.**
- ![Screenshot shows selecting a new identity provider](./media/cloudflare-azure-ad-integration/idp-azure-ad.png)
+ ![Screenshot of the Azure AD option under Select an identity provider.](./media/cloudflare-azure-ad-integration/idp-azure-ad.png)
-4. The **Add Azure ID** dialog appears. Enter credentials from your Azure AD instance and make necessary selections.
+6. The **Add Azure ID** dialog appears.
+7. Enter Azure AD instance credentials and make needed selections.
- ![Screenshot shows making selections to Azure AD dialog box](./media/cloudflare-azure-ad-integration/add-azure-ad-as-idp.png)
+ ![Screenshot of options and selections for Add Azure AD.](./media/cloudflare-azure-ad-integration/add-azure-ad-as-idp.png)
-5. Select **Save**.
+8. Select **Save**.
## Register Cloudflare with Azure AD Use the instructions in the following three sections to register Cloudflare with Azure AD. 1. Sign in to the [Azure portal](https://portal.azure.com/).- 2. Under **Azure Services**, select **Azure Active Directory**.- 3. In the left menu, under **Manage**, select **App registrations**.
+4. Select the **+ New registration** tab.
+5. Enter an application **Name**.
+6. Enter a team name with **callback** at the end of the path. For example, `https://<your-team-name>.cloudflareaccess.com/cdn-cgi/access/callback`
+7. Select **Register**.
-4. Select the **+ New registration tab**.
-
-5. Name your application and enter your [team
- domain](https://developers.cloudflare.com/cloudflare-one/glossary#team-domain), with **callback** at the end of the path: /cdn-cgi/access/callback.
- For example, `https://<your-team-name>.cloudflareaccess.com/cdn-cgi/access/callback`
-
-6. Select **Register**.
+See the [team domain](https://developers.cloudflare.com/cloudflare-one/glossary#team-domain) definition in the Cloudflare Glossary.
- ![Screenshot shows registering an application](./media/cloudflare-azure-ad-integration/register-application.png)
+ ![Screenshot of options and selections for Register an application.](./media/cloudflare-azure-ad-integration/register-application.png)
### Certificates & secrets
-1. On the **Cloudflare Access** screen, under **Essentials**, copy and save the Application (client) ID and the Directory (tenant) ID.
+1. On the **Cloudflare Access** screen, under **Essentials**, copy and save the Application (Client) ID and the Directory (Tenant) ID.
- [ ![Screenshot shows cloudflare access screen](./media/cloudflare-azure-ad-integration/cloudflare-access.png) ](./media/cloudflare-azure-ad-integration/cloudflare-access.png#lightbox)
+ [![Screenshot of the Cloudflare Access screen.](./media/cloudflare-azure-ad-integration/cloudflare-access.png)](./media/cloudflare-azure-ad-integration/cloudflare-access.png#lightbox)
-2. In the left menu, under **Manage**, select **Certificates &
- secrets**.
- ![Screenshot shows Azure AD certificates and secrets screen](./media/cloudflare-azure-ad-integration/add-client-secret.png)
-3. Under **Client secrets**, select **+ New client secret**.
+2. In the left menu, under **Manage**, select **Certificates & secrets**.
-4. In **Description**, name the client secret.
+ ![Screenshot of the certificates and secrets screen.](./media/cloudflare-azure-ad-integration/add-client-secret.png)
+3. Under **Client secrets**, select **+ New client secret**.
+4. In **Description**, enter a name for the client secret.
5. Under **Expires**, select an expiration.
6. Select **Add**.
+7. Under **Client secrets**, from the **Value** field, copy the value. Consider the value an application password. The Azure values appear later in the Cloudflare Access configuration.
-7. Under **Client secrets**, from the **Value** field, copy the value. Consider the value an application password. This example's value is visible, Azure values appear in the Cloudflare Access configuration.
-
- ![Screenshot shows cloudflare access configuration for Azure AD](./media/cloudflare-azure-ad-integration/cloudflare-access-configuration.png)
+ ![Screenshot of Client secrets input.](./media/cloudflare-azure-ad-integration/cloudflare-access-configuration.png)
### Permissions

1. In the left menu, select **API permissions**.
-2. Select **+** **Add a permission**.
-
+2. Select **+ Add a permission**.
3. Under **Select an API**, select **Microsoft Graph**.
- ![Screenshot shows Azure AD API permissions using MS Graph](./media/cloudflare-azure-ad-integration/microsoft-graph.png)
+ ![Screenshot of the Microsoft Graph option under Request API permissions.](./media/cloudflare-azure-ad-integration/microsoft-graph.png)
4. Select **Delegated permissions** for the following permissions:

-- `Email`
-- `openid`
-- `profile`
-- `offline_access`
-- `user.read`
-- `directory.read.all`
-- `group.read.all`
+ * `email`
+ * `openid`
+ * `profile`
+ * `offline_access`
+ * `user.read`
+ * `directory.read.all`
+ * `group.read.all`

-5. Under **Manage**, select **+** **Add permissions**.
+5. Under **Manage**, select **+ Add permissions**.

- [ ![Screenshot shows Azure AD request API permissions screen](./media/cloudflare-azure-ad-integration/request-api-permissions.png) ](./media/cloudflare-azure-ad-integration/request-api-permissions.png#lightbox)
+ [![Screenshot of options and selections for Request API permissions.](./media/cloudflare-azure-ad-integration/request-api-permissions.png)](./media/cloudflare-azure-ad-integration/request-api-permissions.png#lightbox)
6. Select **Grant Admin Consent for ...**.
- [ ![Screenshot shows configured API permissions with granting admin consent](./media/cloudflare-azure-ad-integration/grant-admin-consent.png) ](./media/cloudflare-azure-ad-integration/grant-admin-consent.png#lightbox)
+ [![Screenshot of configured permissions under API permissions.](./media/cloudflare-azure-ad-integration/grant-admin-consent.png)](./media/cloudflare-azure-ad-integration/grant-admin-consent.png#lightbox)
-7. On the [Cloudflare Zero Trust dashboard](https://dash.teams.cloudflare.com/),
- navigate to **Settings> Authentication**.
+7. On the Cloudflare Zero Trust dashboard, navigate to **Settings > Authentication**.
8. Under **Login methods**, select **Add new**.
9. Select **Azure AD**.
-10. Enter the Application ID, Application secret, and Directory ID values.
-
- >[!NOTE]
- >For Azure AD groups, in **Edit your Azure AD identity provider**, for **Support Groups** select **On**.
-
+10. Enter values for **Application ID**, **Application Secret**, and **Directory ID**.
11. Select **Save**.
-## Test the integration
+ >[!NOTE]
+ >For Azure AD groups, in **Edit your Azure AD identity provider**, for **Support Groups** select **On**.
-1. To test the integration on the Cloudflare Zero Trust dashboard,
- navigate to **Settings** > **Authentication**.
+## Test the integration
+1. On the Cloudflare Zero Trust dashboard, navigate to **Settings** > **Authentication**.
2. Under **Login methods**, for Azure AD select **Test**.
- ![Screenshot shows Azure AD as the login method for test](./media/cloudflare-azure-ad-integration/login-methods-test.png)
+ ![Screenshot of login methods.](./media/cloudflare-azure-ad-integration/login-methods-test.png)
3. Enter Azure AD credentials.
4. The **Your connection works** message appears.
- ![Screenshot shows Your connection works screen](./media/cloudflare-azure-ad-integration/connection-success-screen.png)
+ ![Screenshot of the Your connection works message.](./media/cloudflare-azure-ad-integration/connection-success-screen.png)
-## Next steps
-- [Integrate single sign-on (SSO) with Cloudflare](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/)
+## Next steps
-- [Cloudflare integration with Azure AD B2C](../../active-directory-b2c/partner-cloudflare.md)
+- Go to developer.cloudflare.com to [integrate SSO](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/)
+- [Tutorial: Configure Cloudflare Web Application Firewall with Azure AD B2C](../../active-directory-b2c/partner-cloudflare.md)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Previously updated : 04/05/2023 Last updated : 05/02/2023
You can detect inactive accounts by evaluating the `lastSignInDateTime` property
- `https://graph.microsoft.com/v1.0/users?$filter=signInActivity/lastSignInDateTime le 2019-06-01T00:00:00Z`

> [!NOTE]
-> When you request the `signInActivity` property while listing users, the maximum page size is 120 users. Requests with $top set higher than 120 will fail. The `signInActivity` property supports `$filter` (`eq`, `ne`, `not`, `ge`, `le`) *but not with any other filterable properties*.
+> The `signInActivity` property supports `$filter` (`eq`, `ne`, `not`, `ge`, `le`) *but not with any other filterable properties*. You must specify `$select=signInActivity` or `$filter=signInActivity` while [listing users](/graph/api/user-list?view=graph-rest-beta&preserve-view=true), as the signInActivity property is not returned by default.
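If you prefer to run the query from the command line, here's a minimal sketch using `az rest`, which acquires a Microsoft Graph token automatically. It assumes your account has been granted the Graph permissions the sign-in activity data requires (for example, `AuditLog.Read.All`):

```azurecli
# Sketch: list users whose last interactive sign-in was on or before June 1, 2019.
# $select=signInActivity is included because the property isn't returned by default.
az rest --method get \
  --url 'https://graph.microsoft.com/v1.0/users?$filter=signInActivity/lastSignInDateTime%20le%202019-06-01T00:00:00Z&$select=displayName,userPrincipalName,signInActivity'
```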
### What you need to know
The following details relate to the `lastSignInDateTime` property.
- Each interactive sign-in attempt results in an update of the underlying data store. Typically, sign-ins show up in the related sign-in report within 6 hours.
-- To generate a `lastSignInDateTime` timestamp, you must attempt a sign-in. Either a failed or successful sign-in attempt, as long as it is recorded in the [Azure AD sign-in logs](concept-all-sign-ins.md), will generate a `lastSignInDateTime` timestamp. The value of the `lastSignInDateTime` property may be blank if:
+- To generate a `lastSignInDateTime` timestamp, you must attempt a sign-in. Either a failed or successful sign-in attempt, as long as it's recorded in the [Azure AD sign-in logs](concept-all-sign-ins.md), generates a `lastSignInDateTime` timestamp. The value of the `lastSignInDateTime` property may be blank if:
  - The last attempted sign-in of a user took place before April 2020.
  - The affected user account was never used for a sign-in attempt.
The following details relate to the `lastSignInDateTime` property.
## How to investigate a single user
-If you need to view the latest sign-in activity for a user you can view the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph).
+If you need to view the latest sign-in activity for a user, you can view the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph).
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Go to **Azure AD** > **Users** > select a user from the list.
If you need to view the latest sign-in activity for a user you can view the user
![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png)
-The last sign-in date and time shown on this tile may take up to 24 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
+The last sign-in date and time shown on this tile may take up to 6 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
## Next steps
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
az provider register --namespace Microsoft.ContainerService
If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`.
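For example, this sketch (cluster and resource group names are illustrative) moves the cluster auto-upgrade channel off `node-image` so that a node OS channel can then be configured:

```azurecli
# Set the cluster auto-upgrade channel to a value other than node-image.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --auto-upgrade-channel stable
```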
-The nodeosupgradechannel isn't supported on Mariner and Windows OS nodepools.
+The node OS auto-upgrade channel isn't supported on Windows OS node pools. Mariner support is now rolled out and is expected to be available in all regions soon.
## Using node OS auto-upgrade
The following upgrade channels are available:
|||
| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A|
| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
-| `SecurityPatch`|AKS updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. There maybe disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|N/A|
+| `SecurityPatch`|AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only". There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|N/A|
| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|

To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
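The example below is a sketch with illustrative names; at the time of writing, the `--node-os-upgrade-channel` parameter may require a recent Azure CLI or the `aks-preview` extension, so verify availability before relying on it:

```azurecli
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-os-upgrade-channel NodeImage
```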
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
The full `kube-proxy` configuration structure can be found in the [AKS Cluster S
- `mode` - can be set to `IPTABLES` or `IPVS`. Defaults to `IPTABLES`.
- `ipvsConfig` - if `mode` is `IPVS`, this object contains IPVS-specific configuration properties.
- `scheduler` - which connection scheduler to utilize. Supported values:
- - `LeastConnections` - sends connections to the backend pod with the fewest connections
+ - `LeastConnection` - sends connections to the backend pod with the fewest connections
 - `RoundRobin` - distributes connections evenly between backend pods
- `tcpFinTimeoutSeconds` - the value used for timeout after a FIN has been received in a TCP session
- `tcpTimeoutSeconds` - the value used for timeout length for idle TCP sessions
- `udpTimeoutSeconds` - the value used for timeout length for idle UDP sessions

> [!NOTE]
-> IPVS load balancing operates in each node independently and is still only aware of connections flowing through the local node. This means that while `LeastConnections` results in more even load under higher number of connections, when low numbers of connections (# connects < 2 * node count) occur traffic may still be relatively unbalanced.
+> IPVS load balancing operates in each node independently and is still only aware of connections flowing through the local node. This means that while `LeastConnection` results in more even load under higher number of connections, when low numbers of connections (# connects < 2 * node count) occur traffic may still be relatively unbalanced.
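Putting the properties above together, here's a rough sketch of an IPVS configuration applied at cluster creation. The JSON shape follows the field list above, and the `--kube-proxy-config` parameter is assumed from the preview feature, so verify both against the current CLI:

```azurecli
# Sketch: write an IPVS kube-proxy configuration, then apply it when creating the cluster.
cat > kube-proxy.json <<'EOF'
{
  "enabled": true,
  "mode": "IPVS",
  "ipvsConfig": {
    "scheduler": "LeastConnection",
    "tcpFinTimeoutSeconds": 120,
    "tcpTimeoutSeconds": 900,
    "udpTimeoutSeconds": 300
  }
}
EOF

# --kube-proxy-config is assumed from the aks-preview extension.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kube-proxy-config kube-proxy.json
```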
## Utilize `kube-proxy` configuration in a new or existing AKS cluster using Azure CLI
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Previously updated : 10/26/2022 Last updated : 05/02/2023

# Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)
-By default, AKS clusters use [kubenet][kubenet], and an Azure virtual network and subnet are created for you. With *kubenet*, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address. This approach greatly reduces the number of IP addresses that you need to reserve in your network space for pods to use.
+AKS clusters use [kubenet][kubenet] and create an Azure virtual network and subnet for you by default. With kubenet, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address. This approach greatly reduces the number of IP addresses you need to reserve in your network space for pods to use.
-With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow. You can configure the maximum pods deployable to a node at cluster create time or when creating new node pools. If you don't specify maxPods when creating new node pools, you receive a default value of 110 for kubenet.
+With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow. You can configure the maximum pods deployable to a node at cluster creation time or when creating new node pools. If you don't specify `maxPods` when creating new node pools, you receive a default value of *110* for kubenet.
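For instance, this sketch (names are illustrative) sets the maximum pods per node when adding a node pool:

```azurecli
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --max-pods 150
```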
-This article shows you how to use *kubenet* networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
+This article shows you how to use kubenet networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
## Prerequisites

* The virtual network for the AKS cluster must allow outbound internet connectivity.
* Don't create more than one AKS cluster in the same subnet.
-* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range. This range can't be updated after you create your cluster.
-* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. CLI helps do the role assignment automatically. If you are using ARM template or other clients, the role assignment needs to be done manually. You must also have the appropriate permissions, such as the subscription owner, to create a cluster identity and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+* AKS clusters can't use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range. The range can't be updated after you create your cluster.
+* The cluster identity used by the AKS cluster must at least have the [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. CLI helps set the role assignment automatically. If you're using an ARM template or other clients, you need to manually set the role assignment. You must also have the appropriate permissions, such as the subscription owner, to create a cluster identity and assign it permissions. If you want to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, you need the following permissions, as sketched after the warning below:
  * `Microsoft.Network/virtualNetworks/subnets/join/action`
  * `Microsoft.Network/virtualNetworks/subnets/read`

> [!WARNING]
-> To use Windows Server node pools, you must use Azure CNI. The use of kubenet as the network model is not available for Windows Server containers.
+> To use Windows Server node pools, you must use Azure CNI. The kubenet network model isn't available for Windows Server containers.
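A minimal sketch of such a custom role, assuming you substitute your own subscription ID and role name:

```azurecli
az role definition create --role-definition '{
  "Name": "AKS Subnet Joiner",
  "Description": "Join and read subnets for AKS clusters.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```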
## Before you begin
You need the Azure CLI version 2.0.65 or later installed and configured. Run `a
## Overview of kubenet networking with your own subnet
-In many environments, you have defined virtual networks and subnets with allocated IP address ranges. These virtual network resources are used to support multiple services and applications. To provide network connectivity, AKS clusters can use *kubenet* (basic networking) or Azure CNI (*advanced networking*).
+In many environments, you have defined virtual networks and subnets with allocated IP address ranges, and you use these resources to support multiple services and applications. To provide network connectivity, AKS clusters can use *kubenet* (basic networking) or Azure CNI (*advanced networking*).
-With *kubenet*, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other. Instead, User Defined Routing (UDR) and IP forwarding is used for connectivity between pods across nodes. By default, UDRs and IP forwarding configuration is created and maintained by the AKS service, but you have the option to [bring your own route table for custom route management][byo-subnet-route-table]. You could also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:
+With *kubenet*, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other. Instead, User Defined Routing (UDR) and IP forwarding handle connectivity between pods across nodes. UDRs and IP forwarding configuration is created and maintained by the AKS service by default, but you can [bring your own route table for custom route management][byo-subnet-route-table] if you want. You can also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:
![Kubenet network model with an AKS cluster](media/use-kubenet/kubenet-overview.png)
-Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS [Virtual Nodes][virtual-nodes] and Azure Network Policies aren't supported with *kubenet*. You can use [Calico Network Policies][calico-network-policies], as they are supported with kubenet.
+Azure supports a maximum of *400* routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS [virtual nodes][virtual-nodes] and Azure Network Policies aren't supported with *kubenet*. [Calico Network Policies][calico-network-policies] are supported.
-With *Azure CNI*, each pod receives an IP address in the IP subnet, and can directly communicate with other pods and services. Your clusters can be as large as the IP address range you specify. However, the IP address range must be planned in advance, and all of the IP addresses are consumed by the AKS nodes based on the maximum number of pods that they can support. Advanced network features and scenarios such as [Virtual Nodes][virtual-nodes] or Network Policies (either Azure or Calico) are supported with *Azure CNI*.
+With *Azure CNI*, each pod receives an IP address in the IP subnet and can communicate directly with other pods and services. Your clusters can be as large as the IP address range you specify. However, you must plan the IP address range in advance, and all the IP addresses are consumed by the AKS nodes based on the maximum number of pods they can support. Advanced network features and scenarios such as [virtual nodes][virtual-nodes] or Network Policies (either Azure or Calico) are supported with *Azure CNI*.
### Limitations & considerations for kubenet
With *Azure CNI*, each pod receives an IP address in the IP subnet, and can dire
* Route tables and user-defined routes are required for using kubenet, which adds complexity to operations.
* Direct pod addressing isn't supported for kubenet due to kubenet design.
* Unlike Azure CNI clusters, multiple kubenet clusters can't share a subnet.
-* AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more details, see [Network security groups][aks-network-nsg].
+* AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more details, see [Network security groups][aks-network-nsg].
* Features **not supported on kubenet** include:
- * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy), but Calico network policies are supported on kubenet
- * [Windows node pools](./windows-faq.md)
- * [Virtual nodes add-on](virtual-nodes.md#network-requirements)
+ * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy)
+ * [Windows node pools](./windows-faq.md)
+ * [Virtual nodes add-on](virtual-nodes.md#network-requirements)
### IP address availability and exhaustion
-With *Azure CNI*, a common issue is the assigned IP address range is too small to then add additional nodes when you scale or upgrade a cluster. The network team may also not be able to issue a large enough IP address range to support your expected application demands.
+A common issue with *Azure CNI* is that the assigned IP address range is too small to then add more nodes when you scale or upgrade a cluster. The network team also might not be able to issue a large enough IP address range to support your expected application demands.
-As a compromise, you can create an AKS cluster that uses *kubenet* and connect to an existing virtual network subnet. This approach lets the nodes receive defined IP addresses, without the need to reserve a large number of IP addresses up front for all of the potential pods that could run in the cluster.
-
-With *kubenet*, you can use a much smaller IP address range and be able to support large clusters and application demands. For example, even with a */27* IP address range on your subnet, you could run a 20-25 node cluster with enough room to scale or upgrade. This cluster size would support up to *2,200-2,750* pods (with a default maximum of 110 pods per node). The maximum number of pods per node that you can configure with *kubenet* in AKS is 110.
+As a compromise, you can create an AKS cluster that uses *kubenet* and connect to an existing virtual network subnet. This approach lets the nodes receive defined IP addresses without the need to reserve a large number of IP addresses up front for any potential pods that could run in the cluster. With *kubenet*, you can use a much smaller IP address range and support large clusters and application demands. For example, with a */27* IP address range on your subnet, you can run a 20-25 node cluster with enough room to scale or upgrade. This cluster size can support up to *2,200-2,750* pods (with a default maximum of 110 pods per node). The maximum number of pods per node that you can configure with *kubenet* in AKS is 250.
The following basic calculations compare the difference in network models:

-- **kubenet** - a simple */24* IP address range can support up to *251* nodes in the cluster (each Azure virtual network subnet reserves the first three IP addresses for management operations)
- - This node count could support up to *27,610* pods (with a default maximum of 110 pods per node with *kubenet*)
-- **Azure CNI** - that same basic */24* subnet range could only support a maximum of *8* nodes in the cluster
- - This node count could only support up to *240* pods (with a default maximum of 30 pods per node with *Azure CNI*)
+* **kubenet**: A simple */24* IP address range can support up to *251* nodes in the cluster. Each Azure virtual network subnet reserves the first three IP addresses for management operations. This node count can support up to *27,610* pods, with a default maximum of 110 pods per node.
+* **Azure CNI**: That same basic */24* subnet range can only support a maximum of *eight* nodes in the cluster. This node count can only support up to *240* pods, with a default maximum of 30 pods per node.
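As a quick sanity check, the arithmetic behind those figures:

```
/24 subnet = 256 addresses; Azure reserves 5 per subnet (network, broadcast, and 3 for management)
kubenet:    251 nodes x 110 pods per node = 27,610 pods
Azure CNI:  each node consumes 31 addresses (1 node IP + 30 pod IPs)
            251 / 31 = 8 nodes -> 8 nodes x 30 pods per node = 240 pods
```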
> [!NOTE]
-> These maximums don't take into account upgrade or scale operations. In practice, you can't run the maximum number of nodes that the subnet IP address range supports. You must leave some IP addresses available for use during scale or upgrade operations.
+> These maximums don't take into account upgrade or scale operations. In practice, you can't run the maximum number of nodes the subnet IP address range supports. You must leave some IP addresses available for scaling or upgrading operations.
### Virtual network peering and ExpressRoute connections
-To provide on-premises connectivity, both *kubenet* and *Azure-CNI* network approaches can use [Azure virtual network peering][vnet-peering] or [ExpressRoute connections][express-route]. Plan your IP address ranges carefully to prevent overlap and incorrect traffic routing. For example, many on-premises networks use a *10.0.0.0/8* address range that is advertised over the ExpressRoute connection. It's recommended to create your AKS clusters into Azure virtual network subnets outside of this address range, such as *172.16.0.0/16*.
+To provide on-premises connectivity, both *kubenet* and *Azure-CNI* network approaches can use [Azure virtual network peering][vnet-peering] or [ExpressRoute connections][express-route]. Plan your IP address ranges carefully to prevent overlap and incorrect traffic routing. For example, many on-premises networks use a *10.0.0.0/8* address range that's advertised over the ExpressRoute connection. We recommend creating your AKS clusters in Azure virtual network subnets outside this address range, such as *172.16.0.0/16*.
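As a sketch (network names are illustrative, and peering must also be created in the reverse direction on the remote network), a peering from the AKS virtual network looks like this:

```azurecli
az network vnet peering create \
  --resource-group myResourceGroup \
  --name aks-to-hub \
  --vnet-name myAKSVnet \
  --remote-vnet myHubVnet \
  --allow-vnet-access
```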
### Choose a network model to use
-The choice of which network plugin to use for your AKS cluster is usually a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+The following considerations help outline when each network model may be the most appropriate:
-Use *kubenet* when:
+**Use *kubenet* when**:
-- You have limited IP address space.
-- Most of the pod communication is within the cluster.
-- You don't need advanced AKS features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
+* You have limited IP address space.
+* Most of the pod communication is within the cluster.
+* You don't need advanced AKS features, such as virtual nodes or Azure Network Policy.
-Use *Azure CNI* when:
+**Use *Azure CNI* when**:
-- You have available IP address space.
-- Most of the pod communication is to resources outside of the cluster.
-- You don't want to manage user defined routes for pod connectivity.
-- You need AKS advanced features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
+* You have available IP address space.
+* Most of the pod communication is to resources outside of the cluster.
+* You don't want to manage user defined routes for pod connectivity.
+* You need AKS advanced features, such as virtual nodes or Azure Network Policy.
For more information to help you decide which network model to use, see [Compare network models and their support scope][network-comparisons].

## Create a virtual network and subnet
-To get started with using *kubenet* and your own virtual network subnet, first create a resource group using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+1. Create a resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
-If you don't have an existing virtual network and subnet to use, create these network resources using the [az network vnet create][az-network-vnet-create] command. In the following example, the virtual network is named *myAKSVnet* with the address prefix of *192.168.0.0/16*. A subnet is created named *myAKSSubnet* with the address prefix *192.168.1.0/24*.
+2. If you don't have an existing virtual network and subnet to use, create these network resources using the [`az network vnet create`][az-network-vnet-create] command. The following example command creates a virtual network named *myAKSVnet* with the address prefix of *192.168.0.0/16* and a subnet named *myAKSSubnet* with the address prefix *192.168.1.0/24*:
-```azurecli-interactive
-az network vnet create \
- --resource-group myResourceGroup \
- --name myAKSVnet \
- --address-prefixes 192.168.0.0/16 \
- --subnet-name myAKSSubnet \
- --subnet-prefix 192.168.1.0/24
-```
+ ```azurecli-interactive
+ az network vnet create \
+ --resource-group myResourceGroup \
+ --name myAKSVnet \
+ --address-prefixes 192.168.0.0/16 \
+ --subnet-name myAKSSubnet \
+ --subnet-prefix 192.168.1.0/24
+ ```
-Get the subnet resource ID and store as a variable:
+3. Get the subnet resource ID using the [`az network vnet subnet show`][az-network-vnet-subnet-show] command and store it as a variable named `SUBNET_ID` for later use.
-```azurecli-interactive
-SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)
-```
+ ```azurecli-interactive
+ SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)
+ ```
## Create an AKS cluster in the virtual network
-Now create an AKS cluster in your virtual network and subnet using the [az aks create][az-aks-create] command.
-
### Create an AKS cluster with system-assigned managed identities
-You can create an AKS cluster using a system-assigned managed identity by running the following CLI command.
-
> [!NOTE]
-> When using system-assigned identity, azure-cli will grant Network Contributor role to the system-assigned identity after the cluster is created.
-> If you are using an ARM template or other clients, you need to use the [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities]
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --network-plugin kubenet \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
- --vnet-subnet-id $SUBNET_ID
-```
-* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.0.0.0/16.
-
-* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range. The default value is 10.0.0.10.
-
-* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.244.0.0/16.
- * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes.
+> When using system-assigned identity, azure-cli grants the Network Contributor role to the system-assigned identity after the cluster is created. If you're using an ARM template or other clients, you need to use the [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities] instead.
+
+* Create an AKS cluster with system-assigned managed identities using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --network-plugin kubenet \
+ --service-cidr 10.0.0.0/16 \
+ --dns-service-ip 10.0.0.10 \
+ --pod-cidr 10.244.0.0/16 \
+ --docker-bridge-address 172.17.0.1/16 \
+ --vnet-subnet-id $SUBNET_ID
+ ```
+
+ Deployment parameters:
+
+ * *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.0.0.0/16.
+ * *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range. The default value is 10.0.0.10.
+ * *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.244.0.0/16.
+ * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed.
    * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
    * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
+ * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
-* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
+> [!NOTE]
+> If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command:
+>
+> ```azurecli-interactive
+> az aks create \
+> --resource-group myResourceGroup \
+> --name myAKSCluster \
+> --node-count 3 \
+> --network-plugin kubenet --network-policy calico \
+> --vnet-subnet-id $SUBNET_ID
+> ```
-> [!Note]
-> If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
+### Create an AKS cluster with user-assigned managed identities
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 3 \
- --network-plugin kubenet --network-policy calico \
- --vnet-subnet-id $SUBNET_ID
-```
+#### Create a managed identity
-### Create an AKS cluster with user-assigned managed identities
+* Create a managed identity using the [`az identity create`][az-identity-create] command. If you have an existing managed identity, find the Principal ID using the `az identity show --ids <identity-resource-id>` command instead.
+
+ ```azurecli-interactive
+ az identity create --name myIdentity --resource-group myResourceGroup
+ ```
-#### Create or obtain a managed identity
-
-If you don't have a managed identity, you should create one by running the [az identity][az-identity-create] command.
-
-```azurecli-interactive
-az identity create --name myIdentity --resource-group myResourceGroup
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "westus2",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
-
-If you have an existing managed identity, you can find the Principal ID by running the following command:
-
-```azurecli-interactive
-az identity show --ids <identity-resource-id>
-```
-
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "eastus",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+ Your output should resemble the following example output:
+
+ ```output
+ {
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
#### Add role assignment for managed identity
-If you are using Azure CLI, the role will be added automatically and you can skip this step. If you are using an ARM template or other clients, you need to use the Principal ID of the cluster managed identity to perform a role assignment.
+If you're using the Azure CLI, the role is automatically added and you can skip this step. If you're using an ARM template or other clients, you need to use the Principal ID of the cluster managed identity to perform a role assignment.
-To assign the correct delegations in the remaining steps, use the [az network vnet show][az-network-vnet-show] and [az network vnet subnet show][az-network-vnet-subnet-show] commands to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
+* Get the virtual network resource ID using the [`az network vnet show`][az-network-vnet-show] command and store it as a variable named `VNET_ID`.
-```azurecli-interactive
-VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myAKSVnet --query id -o tsv)
-```
+ ```azurecli-interactive
+ VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myAKSVnet --query id -o tsv)
+ ```
-Now assign the managed identity for your AKS cluster *Network Contributor* permissions on the virtual network using the [az role assignment create][az-role-assignment-create] command. Provide the *\<principalId>* as shown in the output from the previous command to create the identity:
+* Assign the managed identity for your AKS cluster *Network Contributor* permissions on the virtual network using the [`az role assignment create`][az-role-assignment-create] command and provide the *\<principalId>*.
-```azurecli-interactive
-az role assignment create --assignee <control-plane-identity-principal-id> --scope $VNET_ID --role "Network Contributor"
-```
+ ```azurecli-interactive
+ az role assignment create --assignee <control-plane-identity-principal-id> --scope $VNET_ID --role "Network Contributor"
-Example:
-```azurecli-interactive
-az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myAKSVnet" --role "Network Contributor"
-```
+ # Example command
+ az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myAKSVnet" --role "Network Contributor"
+ ```
> [!NOTE]
> Permission granted to your cluster's managed identity used by Azure may take up to 60 minutes to populate.

#### Create an AKS cluster
-Now you can create an AKS cluster using the user-assigned managed identity by running the following CLI command. Provide the control plane identity resource ID via `assign-identity`
+* Create an AKS cluster using the [`az aks create`][az-aks-create] command and provide the control plane identity resource ID via `assign-identity` to assign the user-assigned managed identity.
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 3 \
- --network-plugin kubenet \
- --vnet-subnet-id $SUBNET_ID \
- --assign-identity <identity-resource-id>
-```
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 3 \
+ --network-plugin kubenet \
+ --vnet-subnet-id $SUBNET_ID \
+ --assign-identity <identity-resource-id>
+ ```
When you create an AKS cluster, a network security group and route table are automatically created. These network resources are managed by the AKS control plane. The network security group is automatically associated with the virtual NICs on your nodes. The route table is automatically associated with the virtual network subnet. Network security group rules and route tables are automatically updated as you create and expose services.

## Bring your own subnet and route table with kubenet
-With kubenet, a route table must exist on your cluster subnet(s). AKS supports bringing your own existing subnet and route table.
-
-If your custom subnet does not contain a route table, AKS creates one for you and adds rules to it throughout the cluster lifecycle. If your custom subnet contains a route table when you create your cluster, AKS acknowledges the existing route table during cluster operations and adds/updates rules accordingly for cloud provider operations.
+With kubenet, a route table must exist on your cluster subnet(s). AKS supports bringing your own existing subnet and route table. If your custom subnet doesn't contain a route table, AKS creates one for you and adds rules throughout the cluster lifecycle. If your custom subnet contains a route table when you create your cluster, AKS acknowledges the existing route table during cluster operations and adds/updates rules accordingly for cloud provider operations.
> [!WARNING]
-> Custom rules can be added to the custom route table and updated. However, rules are added by the Kubernetes cloud provider which must not be updated or removed. Rules such as 0.0.0.0/0 must always exist on a given route table and map to the target of your internet gateway, such as an NVA or other egress gateway. Take caution when updating rules that only your custom rules are being modified.
+> You can add/update custom rules on the custom route table. However, rules are added by the Kubernetes cloud provider which can't be updated or removed. Rules such as *0.0.0.0/0* must always exist on a given route table and map to the target of your internet gateway, such as an NVA or other egress gateway. Take caution when updating rules.
Learn more about setting up a [custom route table][custom-route-table].
-Kubenet networking requires organized route table rules to successfully route requests. Due to this design, route tables must be carefully maintained for each cluster which relies on it. Multiple clusters cannot share a route table because pod CIDRs from different clusters may overlap which causes unexpected and broken routing. When configuring multiple clusters on the same virtual network or dedicating a virtual network to each cluster, ensure the following limitations are considered.
-
-Limitations:
+kubenet networking requires organized route table rules to successfully route requests. Due to this design, route tables must be carefully maintained for each cluster that relies on it. Multiple clusters can't share a route table because pod CIDRs from different clusters might overlap which causes unexpected and broken routing scenarios. When configuring multiple clusters on the same virtual network or dedicating a virtual network to each cluster, consider the following limitations:
* A custom route table must be associated to the subnet before you create the AKS cluster.
-* The associated route table resource cannot be updated after cluster creation. While the route table resource cannot be updated, custom rules can be modified on the route table.
-* Each AKS cluster must use a single, unique route table for all subnets associated with the cluster. You cannot reuse a route table with multiple clusters due to the potential for overlapping pod CIDRs and conflicting routing rules.
-* For system-assigned managed identity, it's only supported to provide your own subnet and route table via Azure CLI. That's because CLI will add the role assignment automatically. If you are using an ARM template or other clients, you must use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table.
+* The associated route table resource can't be updated after cluster creation. However, custom rules can be modified on the route table.
+* Each AKS cluster must use a single, unique route table for all subnets associated with the cluster. You can't reuse a route table with multiple clusters due to the potential for overlapping pod CIDRs and conflicting routing rules.
+* For system-assigned managed identity, it's only supported to provide your own subnet and route table via Azure CLI because Azure CLI automatically adds the role assignment. If you're using an ARM template or other clients, you must use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table.
* Using the same route table with multiple AKS clusters isn't supported.

> [!NOTE]
-> To create and use your own VNet and route table with `kubenet` network plugin, you need to use [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For system-assigned control plane identity, the identity ID cannot be retrieved before creating a cluster, which causes a delay during role assignment.
->
-> To create and use your own VNet and route table with `azure` network plugin, both system-assigned and user-assigned managed identities are supported. But user-assigned managed identity is more recommended for BYO scenarios.
+> When you create and use your own VNet and route table with the kubenet network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For a system-assigned control plane identity, you can't retrieve the identity ID before creating a cluster, which causes a delay during role assignment.
+>
+> Both system-assigned and user-assigned managed identities are supported when you create and use your own VNet and route table with the azure network plugin. We highly recommend using a user-assigned managed identity for BYO scenarios.
+
+### Add a route table with a user-assigned managed identity to your AKS cluster
After creating a custom route table and associating it with a subnet in your virtual network, you can create a new AKS cluster specifying your route table with a user-assigned managed identity. You need to use the subnet ID for where you plan to deploy your AKS cluster. This subnet also must be associated with your custom route table.
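As a sketch with illustrative names, creating the route table and associating it with the subnet before cluster creation looks like this:

```azurecli
# Create a route table and associate it with the cluster subnet.
az network route-table create --resource-group myResourceGroup --name myRouteTable

az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myAKSVnet \
  --name myAKSSubnet \
  --route-table myRouteTable
```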
-```azurecli-interactive
-# Find your subnet ID
-az network vnet subnet list --resource-group
- --vnet-name
- [--subscription]
-```
+1. Get the subnet ID using the [`az network vnet subnet list`][az-network-vnet-subnet-list] command.
+
+ ```azurecli-interactive
    az network vnet subnet list --resource-group myResourceGroup --vnet-name myAKSVnet
+ ```
+
+2. Create an AKS cluster with a custom subnet pre-configured with a route table using the [`az aks create`][az-aks-create] command and providing your values for the `--vnet-subnet-id`, `--enable-managed-identity`, and `--assign-identity` parameters.
-```azurecli-interactive
-# Create a kubernetes cluster with with a custom subnet preconfigured with a route table
-az aks create -g myResourceGroup -n myManagedCluster --vnet-subnet-id mySubnetIDResourceID --enable-managed-identity --assign-identity controlPlaneIdentityResourceID
-```
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --vnet-subnet-id mySubnetIDResourceID --enable-managed-identity --assign-identity controlPlaneIdentityResourceID
+ ```
## Next steps
-With an AKS cluster deployed into your existing virtual network subnet, you can now use the cluster as normal. Get started with [creating new apps using Helm][develop-helm] or [deploy existing apps using Helm][use-helm].
+This article showed you how to deploy your AKS cluster into your existing virtual network subnet. Now, you can start [creating new apps using Helm][develop-helm] or [deploying existing apps using Helm][use-helm].
<!-- LINKS - External --> [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
With an AKS cluster deployed into your existing virtual network subnet, you can
[aks-network-nsg]: concepts-network.md#network-security-groups
[az-group-create]: /cli/azure/group#az_group_create
[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show
[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show
+[az-network-vnet-subnet-list]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_list
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
[az-aks-create]: /cli/azure/aks#az_aks_create
[byo-subnet-route-table]: #bring-your-own-subnet-and-route-table-with-kubenet
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Title: Introduction to Azure Kubernetes Service description: Learn the features and benefits of Azure Kubernetes Service to deploy and manage container-based applications in Azure. Previously updated : 11/18/2022 Last updated : 05/02/2023

# What is Azure Kubernetes Service?
To secure your AKS clusters, see [Integrate Azure AD with AKS][aks-aad].
### Integrated logging and monitoring
-[Azure Monitor for Container Health][azure-monitor] collects memory and processor performance metrics from containers, nodes, and controllers within your AKS clusters and deployed applications. You can review both container logs and [the Kubernetes logs][aks-master-logs], which are:
+[Container Insights][container-insights] is a feature in [Azure Monitor][azure-monitor-overview] that monitors the health and performance of managed Kubernetes clusters hosted on AKS and provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios. It captures platform metrics and resource logs from containers, nodes, and controllers within your AKS clusters and deployed applications that are available in Kubernetes through the Metrics API.
-* Stored in an [Azure Log Analytics][azure-logs] workspace.
-* Available through the Azure portal, Azure CLI, or a REST endpoint.
+Container Insights has native integration with AKS, like collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks or integration with Grafana. It can also collect Prometheus metrics and send them to [Azure Monitor managed service for Prometheus][azure-monitor-managed-prometheus], and all together deliver end-to-end observability.
-For more information, see [Monitor AKS container health][container-health].
+Logs from the AKS control plane components are collected separately in Azure as resource logs and sent to different locations, such as [Azure Monitor Logs][azure-monitor-logs]. For more information, see [Collect resource logs][collect-resource-logs].
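As a sketch (cluster and group names are illustrative), Container Insights can be enabled on an existing cluster through the monitoring add-on:

```azurecli
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons monitoring
```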
## Clusters and nodes
Learn more about deploying and managing AKS.
[azure-devops]: ../devops-project/overview.md
[azure-disk]: ./azure-disk-csi.md
[azure-files]: ./azure-files-csi.md
-[container-health]: ../azure-monitor/containers/container-insights-overview.md
[aks-master-logs]: monitor-aks-reference.md#resource-logs
[aks-supported versions]: supported-kubernetes-versions.md
[concepts-clusters-workloads]: concepts-clusters-workloads.md
Learn more about deploying and managing AKS.
[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
[aad]: managed-azure-ad.md
[aks-monitor]: monitor-aks.md
-[azure-monitor]: /previous-versions/azure/azure-monitor/containers/containers
-[azure-logs]: ../azure-monitor/logs/log-analytics-overview.md
+[azure-monitor-overview]: ../azure-monitor/overview.md
+[container-insights]: ../azure-monitor/containers/container-insights-overview.md
+[azure-monitor-managed-prometheus]: ../azure-monitor/essentials/prometheus-metrics-overview.md
+[collect-resource-logs]: monitor-aks.md#collect-resource-logs
+[azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md
[helm]: quickstart-helm.md [aks-best-practices]: best-practices.md
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you'll:
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
* Deploy an AKS cluster using a Bicep file.
* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]

* This article requires version 2.20.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* This article requires an existing Azure resource group. If you need to create one, you can use the [`az group create`][az-group-create] command or the [`New-AzResourceGroup`][new-az-resource-group] cmdlet.
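For example, a resource group could be created with the CLI as follows (the name and location are placeholders):

```azurecli
# Create a resource group to hold the quickstart resources
az group create --name myResourceGroup --location eastus
```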
### [Azure PowerShell](#tab/azure-powershell)
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
### Create an SSH key pair
-To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
-1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
+2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command.
```console
+ # Create an SSH key pair using Azure CLI
+ az sshkey create --name "mySSHKey" --resource-group "myResourceGroup"
+
+ # Create an SSH key pair using ssh-keygen
    ssh-keygen -t rsa -b 4096
    ```
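If you created the key pair with the CLI, the public key can be retrieved later, as a sketch (assumes the key and resource group names used above):

```azurecli
# Print the public key for use in the sshRSAPublicKey parameter
az sshkey show --name "mySSHKey" --resource-group "myResourceGroup" --query "publicKey" --output tsv
```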
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
## Deploy the Bicep file

1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI)
+> [!IMPORTANT]
+> The Bicep file sets the `clusterName` param to the string *aks101cluster*. If you want to use a different cluster name, make sure to update the string to your preferred cluster name before saving the file to your computer.
+
+2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [Azure CLI](#tab/azure-cli)
```azurecli
- az group create --name myResourceGroup --location eastus
- az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters clusterName=<cluster-name> dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>'
+ az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>'
```
- # [PowerShell](#tab/PowerShell)
+ # [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
New-AzResourceGroup -Name myResourceGroup -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName=<cluster-name> -dnsPrefix=<dns-prefix> -linuxAdminUsername=<linux-admin-username> -sshRSAPublicKey="<ssh-key>"
+ New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -dnsPrefix=<dns-prefix> -linuxAdminUsername=<linux-admin-username> -sshRSAPublicKey="<ssh-key>"
```

Provide the following values in the commands:
- * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
* **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
* **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
* **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
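As an illustration, a fully substituted CLI invocation might look like the following (all values are placeholders; the public key is read from the default ssh-keygen location):

```azurecli
az deployment group create \
    --resource-group myResourceGroup \
    --template-file main.bicep \
    --parameters dnsPrefix=myakscluster linuxAdminUsername=azureuser \
        sshRSAPublicKey="$(cat ~/.ssh/id_rsa.pub)"
```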
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet.

1. Create a file named `azure-vote.yaml`.
-
1. Copy in the following YAML definition:

    ```yaml
Remove-AzResourceGroup -Name myResourceGroup
In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+To learn more about AKS and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
To learn more about AKS, and walk through a complete code to deployment example,
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service <!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md
-[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks#az_aks_browse
-[az-aks-create]: /cli/azure/aks#az_aks_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
To learn more about AKS, and walk through a complete code to deployment example,
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[azure-cli-install]: /cli/azure/install-azure-cli
[install-azure-powershell]: /powershell/azure/install-az-ps [connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount
-[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
-[azure-portal]: https://portal.azure.com
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[new-az-resource-group]: /powershell/module/az.resources/new-azresourcegroup
+[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
This article shows you how to upgrade AKS cluster node images and how to update
Check for available node image upgrades using the [`az aks nodepool get-upgrades`][az-aks-nodepool-get-upgrades] command.
-```azurecli
+```azurecli-interactive
az aks nodepool get-upgrades \
    --nodepool-name mynodepool \
    --cluster-name myAKSCluster \
The example output shows `AKSUbuntu-1604-2020.10.28` as the `latestNodeImageVers
Compare the latest version with your current node image version using the [`az aks nodepool show`][az-aks-nodepool-show] command.
-```azurecli
+```azurecli-interactive
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
In this example, there's an available node image version upgrade, which is from
Upgrade the node image using the [`az aks upgrade`][az-aks-upgrade] command with the `--node-image-only` flag.
-```azurecli
+```azurecli-interactive
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
You can check the status of the node images using the `kubectl get nodes` comman
>[!NOTE]
> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
-```azurecli
+```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```

When the upgrade is complete, use the [`az aks show`][az-aks-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
-```azurecli
+```azurecli-interactive
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster
az aks show \
To update the OS image of a node pool without doing a Kubernetes cluster upgrade, use the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command with the `--node-image-only` flag.
-```azurecli
+```azurecli-interactive
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
You can check the status of the node images with the `kubectl get nodes` command
>[!NOTE]
> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
-```azurecli
+```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```

When the upgrade is complete, use the [`az aks nodepool show`][az-aks-nodepool-show] command to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
-```azurecli
+```azurecli-interactive
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
To speed up the node image upgrade process, you can upgrade your node images usi
If you'd like to increase the speed of upgrades, use the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--max-surge` flag to configure the number of nodes used for upgrades. To learn more about the trade-offs of various `--max-surge` settings, see [Customize node surge upgrade][max-surge].
-```azurecli
+```azurecli-interactive
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
az aks nodepool update \
You can check the status of the node images with the `kubectl get nodes` command.
-```azurecli
+```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```

Use `az aks nodepool show` to get the updated node pool details. The current node image is shown in the `nodeImageVersion` property.
-```azurecli
+```azurecli-interactive
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 |
| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024 |
+## AKS components breaking changes by version
+
+Before you upgrade to any of the available minor versions, review the following important changes.
+
+|AKS component/add-on | v1.24 | v1.25 | v1.26 |
+|--|--|--|--|
+| Overlay VPA | 0.11.0, no breaking changes |0.12.0</br><b>Breaking Changes:</b></br>Switch to using policy [v1 API](https://github.com/kubernetes/autoscaler/pull/4895) and Switch to using [CronJobs v1 API](https://github.com/kubernetes/autoscaler/pull/4887)|0.12.0</br><b>Breaking Changes:</b></br>Switch to using policy [v1 API](https://github.com/kubernetes/autoscaler/pull/4895) and Switch to using [CronJobs v1 API](https://github.com/kubernetes/autoscaler/pull/4887)|
+|OS Images (Ubuntu)| Ubuntu 18.04 by default with cgroupv1 | Ubuntu 22.04 by default with cgroupv2.</br><b>Breaking Changes:</b></br>If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 | Ubuntu 22.04 by default with cgroupv2.</br><b>Breaking Changes:</b></br>If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 |
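+One way to confirm which cgroup version a node is actually running is a quick check from a node debug pod (an illustrative sketch, not from this article; the node name is a placeholder):
+
+```bash
+# Open a debug pod on the node; the host filesystem is mounted under /host
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=ubuntu
+
+# Inside the debug pod: prints cgroup2fs on cgroup v2 (Ubuntu 22.04), tmpfs on cgroup v1
+stat -fc %T /host/sys/fs/cgroup
+```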
+
## Alias minor version

> [!NOTE]
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv
> Kubernetes native capabilities to federate with any external identity providers on behalf of the
> application.
>
-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022 and the projected will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement).
-> The AKS Managed add-on will begin deprecation in Sept. 2023.
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement).
+> The AKS Managed add-on begins deprecation in Sept. 2023.
## Before you begin
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
The following example shows how to create a virtual network by using Resource Ma
The following example shows how to create an API Management instance in a virtual network configured for internal access only.
-1. API Management stv2 requires a public IP with a `DomainNameLabel`:
+1. API Management stv2 requires a public IP with a unique `DomainNameLabel`:
```powershell
$apimPublicIpAddressId = New-AzPublicIpAddress -ResourceGroupName $resGroupName -name "pip-apim" -location $location `
To set up custom domain names in API Management:
1. Initialize the following variables with the details of the certificates with private keys for the domains and the trusted root certificate. In this example, we use `api.contoso.net`, `portal.contoso.net`, and `management.contoso.net`.

   ```powershell
- $gatewayHostname = "api.$domain" # API gateway host
- $portalHostname = "portal.$domain" # API developer portal host
- $managementHostname = "management.$domain" # API management endpoint host
- $gatewayCertPfxPath = "C:\Users\Contoso\gateway.pfx" # Full path to api.contoso.net .pfx file
- $portalCertPfxPath = "C:\Users\Contoso\portal.pfx" # Full path to portal.contoso.net .pfx file
- $managementCertPfxPath = "C:\Users\Contoso\management.pfx" # Full path to management.contoso.net .pfx file
- $gatewayCertPfxPassword = "certificatePassword123" # Password for api.contoso.net pfx certificate
- $portalCertPfxPassword = "certificatePassword123" # Password for portal.contoso.net pfx certificate
- $managementCertPfxPassword = "certificatePassword123" # Password for management.contoso.net pfx certificate
+ $gatewayHostname = "api.$domain" # API gateway host
+ $portalHostname = "portal.$domain" # API developer portal host
+ $managementHostname = "management.$domain" # API management endpoint host
+ $gatewayCertPfxPath = "C:\Users\Contoso\gateway.pfx" # Full path to api.contoso.net .pfx file
+ $portalCertPfxPath = "C:\Users\Contoso\portal.pfx" # Full path to portal.contoso.net .pfx file
+ $managementCertPfxPath = "C:\Users\Contoso\management.pfx" # Full path to management.contoso.net .pfx file
+ $gatewayCertPfxPassword = "certificatePassword123" # Password for api.contoso.net pfx certificate
+ $portalCertPfxPassword = "certificatePassword123" # Password for portal.contoso.net pfx certificate
+ $managementCertPfxPassword = "certificatePassword123" # Password for management.contoso.net pfx certificate
# Path to trusted root CER file used in Application Gateway HTTP settings
- $trustedRootCertCerPath = "C:\Users\Contoso\trustedroot.cer" # Full path to contoso.net trusted root .cer file
+ $trustedRootCertCerPath = "C:\Users\Contoso\trustedroot.cer" # Full path to contoso.net trusted root .cer file
   $certGatewayPwd = ConvertTo-SecureString -String $gatewayCertPfxPassword -AsPlainText -Force
   $certPortalPwd = ConvertTo-SecureString -String $portalCertPfxPassword -AsPlainText -Force
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/log
  loggerType: 'azureEventHub'
  description: 'Event hub logger with system-assigned managed identity'
  credentials: {
- endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
    identityClientId: 'systemAssigned'
    name: 'ApimEventHub'
  }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"description": "Event hub logger with system-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {
- "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
"identityClientId": "SystemAssigned", "name": "ApimEventHub" },
resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/logge
  loggerType: 'azureEventHub'
  description: 'Event hub logger with user-assigned managed identity'
  credentials: {
- endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
    identityClientId: '<ClientID>'
    name: 'ApimEventHub'
  }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"description": "Event hub logger with user-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {
- "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
"identityClientId": "<ClientID>", "name": "ApimEventHub" },
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
The `invoke-dapr-binding` policy instructs API Management gateway to trigger an outbound Dapr [binding](https://github.com/dapr/docs/blob/master/README.md). The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/bindings/{{bind-name}}`, replacing the template parameter and adding content specified in the policy statement.
-The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime is responsible for invoking the external resource represented by the binding. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
+The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime is responsible for invoking the external resource represented by the binding. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The policy assumes that Dapr runtime is running in a sidecar container in the sa
### Usage notes
-Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
+Dapr support must be [enabled](self-hosted-gateway-enable-dapr.md) in the self-hosted gateway.
## Example
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
The `publish-to-dapr` policy instructs API Management gateway to send a message to a Dapr Publish/Subscribe topic. The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/publish/{{pubsub-name}}/{{topic}}`, replacing template parameters and adding content specified in the policy statement.
-The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime implements the Pub/Sub semantics. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
+The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime implements the Pub/Sub semantics. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The policy assumes that Dapr runtime is running in a sidecar container in the sa
### Usage notes
-Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
+Dapr support must be [enabled](self-hosted-gateway-enable-dapr.md) in the self-hosted gateway.
## Example
api-management Self Hosted Gateway Enable Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-enable-dapr.md
+
+ Title: Enable Dapr support in self-hosted gateway | Azure API Management
+description: Learn how to enable Dapr support in the self-hosted gateway of Azure API Management to expose and manage Dapr microservices as APIs.
+Last updated : 05/01/2023
+# Enable Dapr support in the self-hosted gateway
+
+Dapr integration in API Management enables operations teams to directly expose Dapr microservices deployed on Kubernetes clusters as APIs, and make those APIs discoverable and easily consumable by developers with proper controls across multiple Dapr deployments, whether in the cloud, on-premises, or on the edge.
+
+## About Dapr
+
+Dapr is a portable runtime for building stateless and stateful microservices-based applications with any language or framework. It codifies the common microservice patterns, like service discovery and invocation with built-in retry logic, publish-and-subscribe with at-least-once delivery semantics, or pluggable binding resources to ease composition using external services. Go to [dapr.io](https://dapr.io) for detailed information and instructions on how to get started with Dapr.
+
+## Enable Dapr support
+
+To turn on Dapr support in the API Management self-hosted gateway, add the following [Dapr annotations](https://docs.dapr.io/reference/arguments-annotations-overview/) to the [Kubernetes deployment template](how-to-deploy-self-hosted-gateway-kubernetes.md), replacing `app-name` with a desired name. A complete walkthrough of setting up and using API Management with Dapr is available [here](https://aka.ms/apim/dapr/walkthru).
+
+```yml
+template:
+ metadata:
+ labels:
+ app: app-name
+ annotations:
+ dapr.io/enabled: "true"
+ dapr.io/app-id: "app-name"
+```
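+After the gateway pods restart, you can verify that the Dapr sidecar was injected, as a sketch (assumes the gateway deployment uses the label shown above and runs in the current namespace):
+
+```bash
+# The gateway pod should now report an extra daprd container (for example, READY 2/2)
+kubectl get pods -l app=app-name
+```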
+> [!TIP]
+> You can also deploy the [self-hosted gateway with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) and use the Dapr configuration options.
+
+## Dapr integration policies
+
+API Management provides specific [policies](api-management-policies.md#dapr-integration-policies) to interact with Dapr APIs exposed through the self-hosted gateway.
+
+## Next steps
+
+* Learn more about [Dapr integration in API Management](https://cloudblogs.microsoft.com/opensource/2020/09/22/announcing-dapr-integration-azure-api-management-service-apim/)
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
helm install azure-api-management-gateway \
- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- [Enable Dapr support on self-hosted gateway](self-hosted-gateway-enable-dapr.md)
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
The `set-backend-service` policy sets the target URL for the current request to `http://localhost:3500/v1.0/invoke/{app-id}[.{ns-name}]/method/{method-name}`, replacing template parameters with values specified in the policy statement.
-The policy assumes that Dapr runs in a sidecar container in the same pod as the gateway. Upon receiving the request, Dapr runtime performs service discovery and actual invocation, including possible protocol translation between HTTP and gRPC, retries, distributed tracing, and error handling. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
+The policy assumes that Dapr runs in a sidecar container in the same pod as the gateway. Upon receiving the request, Dapr runtime performs service discovery and actual invocation, including possible protocol translation between HTTP and gRPC, retries, distributed tracing, and error handling. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The policy assumes that Dapr runs in a sidecar container in the same pod as the
### Usage notes
-Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
+Dapr support must be [enabled](self-hosted-gateway-enable-dapr.md) in the self-hosted gateway.
## Example
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
In order to read secrets from Key Vault, you need to have a vault created and gi
### Access network-restricted vaults
-If your vault is configured with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md), you will also need to ensure that the application has network access.
+If your vault is configured with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md), you will also need to ensure that the application has network access. Vaults shouldn't depend on the app's public outbound IPs because the origin IP of the secret request could be different. Instead, the vault should be configured to accept traffic from a virtual network used by the app.
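A minimal sketch of allowing an app's integration subnet through the vault firewall (the vault, VNet, and subnet names are placeholders; the subnet needs the Microsoft.KeyVault service endpoint enabled):

```azurecli
# Allow traffic to the vault from the subnet used for the app's virtual network integration
az keyvault network-rule add --name myKeyVault --vnet-name myVNet --subnet myIntegrationSubnet
```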
1. Make sure the application has outbound networking capabilities configured, as described in [App Service networking features](./networking-features.md) and [Azure Functions networking options](../azure-functions/functions-networking-options.md).
If your vault is configured with [network restrictions](../key-vault/general/ove
2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it.

> [!NOTE]
> Windows container currently does not support Key Vault references over VNet Integration.

### Access vaults with a user-assigned identity

Some apps need to reference secrets at creation time, when a system-assigned identity would not yet be available. In these cases, a user-assigned identity can be created and given access to the vault in advance.
An example pseudo-template for a function app might look like the following:
"[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'), variables('appInsightsKeyName'))]" ], "properties": {
- "AzureWebJobsStorage": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('storageConnectionStringResourceId')).secretUriWithVersion, ')')]",
- "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('storageConnectionStringResourceId')).secretUriWithVersion, ')')]",
- "APPINSIGHTS_INSTRUMENTATIONKEY": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('appInsightsKeyResourceId')).secretUriWithVersion, ')')]",
+ "AzureWebJobsStorage": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('storageConnectionStringName')).secretUriWithVersion, ')')]",
+ "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('storageConnectionStringName')).secretUriWithVersion, ')')]",
+ "APPINSIGHTS_INSTRUMENTATIONKEY": "[concat('@Microsoft.KeyVault(SecretUri=', reference(variables('appInsightsKeyName')).secretUriWithVersion, ')')]",
"WEBSITE_ENABLE_SYNC_UPDATE_SITE": "true" //... }
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
description: Configure a custom domain suffix for the Azure App Service Environm
Previously updated : 02/09/2023 Last updated : 05/03/2023 zone_pivot_groups: app-service-environment-portal-arm
Unlike earlier versions, the FTPS endpoints for your App Services on your App Se
## Prerequisites

- ILB variation of App Service Environment v3.
-- Valid SSL/TLS certificate must be stored in an Azure Key Vault. For more information on using certificates with App Service, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
+- Valid SSL/TLS certificate must be stored in an Azure Key Vault in .PFX format. For more information on using certificates with App Service, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
### Managed identity
If you choose to use Azure role-based access control to manage access to your ke
### Certificate
-The certificate for custom domain suffix must be stored in an Azure Key Vault. App Service Environment will use the managed identity you selected to get the certificate. The key vault must be publicly accessible, however you can lock down the key vault by restricting access to your App Service Environment's outbound IPs. You can find your App Service Environment's outbound IPs under "Default outbound addresses" on the **IP addresses** page for your App Service Environment. You'll need to add both IPs to your key vault's firewall rules. For more information on key vault network security and firewall rules, see [Configure Azure Key Vault firewalls and virtual networks](../../key-vault/general/network-security.md#key-vault-firewall-enabled-ipv4-addresses-and-rangesstatic-ips). The key vault also must not have any [private endpoint connections](../../private-link/private-endpoint-overview.md).
+The certificate for custom domain suffix must be stored in an Azure Key Vault. The certificate must be uploaded in .PFX format. Certificates in .PEM format are not supported at this time. App Service Environment will use the managed identity you selected to get the certificate. The key vault must be publicly accessible; however, you can lock down the key vault by restricting access to your App Service Environment's outbound IPs. You can find your App Service Environment's outbound IPs under "Default outbound addresses" on the **IP addresses** page for your App Service Environment. You'll need to add both IPs to your key vault's firewall rules, as sketched below. For more information on key vault network security and firewall rules, see [Configure Azure Key Vault firewalls and virtual networks](../../key-vault/general/network-security.md#key-vault-firewall-enabled-ipv4-addresses-and-rangesstatic-ips). The key vault also must not have any [private endpoint connections](../../private-link/private-endpoint-overview.md).
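For example, the two default outbound addresses could be added to the vault firewall as follows (a sketch; the vault name and IP addresses are placeholders):

```azurecli
# Allow the App Service Environment's default outbound IPs through the key vault firewall
az keyvault network-rule add --name myKeyVault --ip-address 203.0.113.10
az keyvault network-rule add --name myKeyVault --ip-address 203.0.113.11
```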
:::image type="content" source="./media/custom-domain-suffix/key-vault-networking.png" alt-text="Screenshot of a sample networking page for key vault to allow custom domain suffix feature.":::
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
Previously updated : 04/04/2023 Last updated : 05/03/2023
HTTP 400 response codes are commonly observed when:
- Mutual authentication is configured and unable to properly negotiate.
- The request is not compliant to RFC.
-Some of the common reasons for the request to be non-compliant to RFC is listed.So review the urls/requests from the clients and ensure it's compliant to RFC.
+Some common reasons for the request to be non-compliant to RFC are:
| Category | Examples |
| - | - |
Some of the common reasons for the request to be non-compliant to RFC is listed.
| Duplicate Headers | Authorization:\<base64 encoded content\>,Authorization: \<base64 encoded content\> |
| Invalid value in Content-Length | Content-Length: **abc**,Content-Length: **-10**|

For cases when mutual authentication is configured, several scenarios can lead to an HTTP 400 response being returned to the client, such as:

- Client certificate isn't presented, but mutual authentication is enabled.
- DN validation is enabled and the DN of the client certificate doesn't match the DN of the specified certificate chain.
For cases when mutual authentication is configured, several scenarios can lead t
- OCSP Client Revocation check is enabled, but OCSP responder isn't provided in the certificate.

For more information about troubleshooting mutual authentication, see [Error code troubleshooting](mutual-authentication-troubleshooting.md#solution-2).

#### 401 – Unauthorized
-An HTTP 401 unauthorized response can be returned when backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication.
+An HTTP 401 unauthorized response can be returned when the backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication.
There are several ways to resolve this:

- Allow anonymous access on backend pool.
-- Configure the probe to send the request to another "fake" site that doesn't require NTLM.
+- Configure the probe to send the request to another "fake" site that doesn't require NTLM.
  - Not recommended, as this will not tell us if the actual site behind the application gateway is active or not.
- Configure application gateway to allow 401 responses as valid for the probes: [Probe matching conditions](/azure/application-gateway/application-gateway-probe-overview).
- #### 403 – Forbidden
+
+#### 403 – Forbidden
HTTP 403 Forbidden is presented when customers are utilizing WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client is presented with a 403 Forbidden response.
Azure application Gateway V2 SKU sent HTTP 504 errors if the backend response ti
## Next steps

If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
Title: Migrate from v1 to v2 - Azure Application Gateway
-description: This article shows you how to migrate Azure Application Gateway and Web Application Firewall from v1 to v2
+ Title: Migrate from V1 to V2 - Azure Application Gateway
+description: This article shows you how to migrate Azure Application Gateway and Web Application Firewall from V1 to V2
This article primarily helps with the configuration migration. The traffic migra
An Azure PowerShell script is provided in this document. It performs the following operations to help you with the configuration:
-* Creates a new Standard_v2 or WAF_v2 gateway in a virtual network subnet that you specify.
-* Seamlessly copies the configuration associated with the v1 Standard or WAF gateway to the newly created Standard_V2 or WAF_V2 gateway.
+* Creates a new Standard_V2 or WAF_V2 gateway in a virtual network subnet that you specify.
+* Seamlessly copies the configuration associated with the V1 Standard or WAF gateway to the newly created Standard_V2 or WAF_V2 gateway.
## Downloading the script
To run the script:
```
AzureAppGwMigration.ps1
- -resourceId <v1 application gateway Resource ID>
+ -resourceId <V1 application gateway Resource ID>
-subnetAddressRange <subnet space you want to use> -appgwName <string to use to append> -AppGwResourceGroupName <resource group name you want to use>
To run the script:
```

Parameters for the script:
- * **resourceId: [String]: Required** - This is the Azure Resource ID for your existing Standard v1 or WAF v1 gateway. To find this string value, navigate to the Azure portal, select your application gateway or WAF resource, and click the **Properties** link for the gateway. The Resource ID is located on that page.
+ * **resourceId: [String]: Required** - This is the Azure Resource ID for your existing Standard V1 or WAF V1 gateway. To find this string value, navigate to the Azure portal, select your application gateway or WAF resource, and click the **Properties** link for the gateway. The Resource ID is located on that page.
You can also run the following Azure PowerShell commands to get the Resource ID: ```azurepowershell
- $appgw = Get-AzApplicationGateway -Name <v1 gateway name> -ResourceGroupName <resource group Name>
+ $appgw = Get-AzApplicationGateway -Name <V1 gateway name> -ResourceGroupName <resource group Name>
   $appgw.Id
   ```
- * **subnetAddressRange: [String]: Required** - This is the IP address space that you've allocated (or want to allocate) for a new subnet that contains your new v2 gateway. This must be specified in the CIDR notation. For example: 10.0.0.0/24. You don't need to create this subnet in advance but the CIDR needs to be part of the VNET address space. The script creates it for you if it doesn't exist and if it exists, it will use the existing one (make sure the subnet is either empty, contains only v2 Gateway if any, and has enough available IPs).
- * **appgwName: [String]: Optional**. This is a string you specify to use as the name for the new Standard_v2 or WAF_v2 gateway. If this parameter isn't supplied, the name of your existing v1 gateway will be used with the suffix *_v2* appended.
- * **AppGwResourceGroupName: [String]: Optional**. Name of resource group where you want v2 Application Gateway resources to be created (default value will be `<v1-app-gw-rgname>`)
- * **sslCertificates: [PSApplicationGatewaySslCertificate]: Optional**. A comma-separated list of PSApplicationGatewaySslCertificate objects that you create to represent the TLS/SSL certs from your v1 gateway must be uploaded to the new v2 gateway. For each of your TLS/SSL certs configured for your Standard v1 or WAF v1 gateway, you can create a new PSApplicationGatewaySslCertificate object via the `New-AzApplicationGatewaySslCertificate` command shown here. You need the path to your TLS/SSL Cert file and the password.
+ * **subnetAddressRange: [String]: Required** - This is the IP address space that you've allocated (or want to allocate) for a new subnet that contains your new V2 gateway. This must be specified in the CIDR notation. For example: 10.0.0.0/24. You don't need to create this subnet in advance but the CIDR needs to be part of the VNET address space. The script creates it for you if it doesn't exist and if it exists, it will use the existing one (make sure the subnet is either empty, contains only V2 Gateway if any, and has enough available IPs).
+ * **appgwName: [String]: Optional**. This is a string you specify to use as the name for the new Standard_V2 or WAF_V2 gateway. If this parameter isn't supplied, the name of your existing V1 gateway will be used with the suffix *_V2* appended.
+ * **AppGwResourceGroupName: [String]: Optional**. Name of resource group where you want V2 Application Gateway resources to be created (default value will be `<V1-app-gw-rgname>`)
+ * **sslCertificates: [PSApplicationGatewaySslCertificate]: Optional**. A comma-separated list of PSApplicationGatewaySslCertificate objects that you create to represent the TLS/SSL certs from your V1 gateway must be uploaded to the new V2 gateway. For each of your TLS/SSL certs configured for your Standard V1 or WAF V1 gateway, you can create a new PSApplicationGatewaySslCertificate object via the `New-AzApplicationGatewaySslCertificate` command shown here. You need the path to your TLS/SSL Cert file and the password.
- This parameter is only optional if you don't have HTTPS listeners configured for your v1 gateway or WAF. If you have at least one HTTPS listener setup, you must specify this parameter.
+ This parameter is only optional if you don't have HTTPS listeners configured for your V1 gateway or WAF. If you have at least one HTTPS listener setup, you must specify this parameter.
```azurepowershell
$password = ConvertTo-SecureString <cert-password> -AsPlainText -Force
To run the script:
```

To create a list of PSApplicationGatewayTrustedRootCertificate objects, see [New-AzApplicationGatewayTrustedRootCertificate](/powershell/module/Az.Network/New-AzApplicationGatewayTrustedRootCertificate).
- * **privateIpAddress: [String]: Optional**. A specific private IP address that you want to associate to your new v2 gateway. This must be from the same VNet that you allocate for your new v2 gateway. If this isn't specified, the script allocates a private IP address for your v2 gateway.
- * **publicIpResourceId: [String]: Optional**. The resourceId of existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new v2 gateway. If this isn't specified, the script allocates a new public IP in the same resource group. The name is the v2 gateway's name with *-IP* appended.
- * **validateMigration: [switch]: Optional**. Use this parameter if you want the script to do some basic configuration comparison validations after the v2 gateway creation and the configuration copy. By default, no validation is done.
- * **enableAutoScale: [switch]: Optional**. Use this parameter if you want the script to enable AutoScaling on the new v2 gateway after it's created. By default, AutoScaling is disabled. You can always manually enable it later on the newly created v2 gateway.
+ * **privateIpAddress: [String]: Optional**. A specific private IP address that you want to associate to your new V2 gateway. This must be from the same VNet that you allocate for your new V2 gateway. If this isn't specified, the script allocates a private IP address for your V2 gateway.
+ * **publicIpResourceId: [String]: Optional**. The resourceId of existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new V2 gateway. If this isn't specified, the script allocates a new public IP in the same resource group. The name is the V2 gateway's name with *-IP* appended.
+ * **validateMigration: [switch]: Optional**. Use this parameter if you want the script to do some basic configuration comparison validations after the V2 gateway creation and the configuration copy. By default, no validation is done.
+ * **enableAutoScale: [switch]: Optional**. Use this parameter if you want the script to enable AutoScaling on the new V2 gateway after it's created. By default, AutoScaling is disabled. You can always manually enable it later on the newly created V2 gateway.
1. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
To run the script:
### Caveats\Limitations
-* The new v2 gateway has new public and private IP addresses. It isn't possible to move the IP addresses associated with the existing v1 gateway seamlessly to v2. However, you can allocate an existing (unallocated) public or private IP address to the new v2 gateway.
-* You must provide an IP address space for another subnet within your virtual network where your v1 gateway is located. The script can't create the v2 gateway in any existing subnets that already have a v1 gateway. However, if the existing subnet already has a v2 gateway, that may still work provided there's enough IP address space.
-* If you have a network security group or user defined routes associated to the v2 gateway subnet, make sure they adhere to the [NSG requirements](../application-gateway/configuration-infrastructure.md#network-security-groups) and [UDR requirements](../application-gateway/configuration-infrastructure.md#supported-user-defined-routes) for a successful migration
+* The new V2 gateway has new public and private IP addresses. It isn't possible to move the IP addresses associated with the existing V1 gateway seamlessly to V2. However, you can allocate an existing (unallocated) public or private IP address to the new V2 gateway.
+* You must provide an IP address space for another subnet within your virtual network where your V1 gateway is located. The script can't create the V2 gateway in any existing subnets that already have a V1 gateway. However, if the existing subnet already has a V2 gateway, that may still work provided there's enough IP address space.
+* If you have a network security group or user defined routes associated to the V2 gateway subnet, make sure they adhere to the [NSG requirements](../application-gateway/configuration-infrastructure.md#network-security-groups) and [UDR requirements](../application-gateway/configuration-infrastructure.md#supported-user-defined-routes) for a successful migration
* [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
-* To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your v1 gateway.
-* If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2.
+* To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your V1 gateway.
+* If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new V2 gateway. FIPS mode isn't supported in V2.
* In case of Private IP only V1 gateway, the script generates a private and public IP address for the new V2 gateway. The Private IP only V2 gateway is currently in public preview. Once it becomes generally available, customers can utilize the script to transfer their private IP only V1 gateway to a private IP only V2 gateway.
-* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1.
-* NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run.
+* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from V1.
+* NTLM and Kerberos authentication is not supported by Application Gateway V2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from V1 to V2 gateways if run.
## Traffic migration
-First, double check that the script successfully created a new v2 gateway with the exact configuration migrated over from your v1 gateway. You can verify this from the Azure portal.
+First, double check that the script successfully created a new V2 gateway with the exact configuration migrated over from your V1 gateway. You can verify this from the Azure portal.
-Also, send a small amount of traffic through the v2 gateway as a manual test.
+Also, send a small amount of traffic through the V2 gateway as a manual test.
Here are a few scenarios where your current application gateway (Standard) may receive client traffic, and our recommendations for each one:
-* **A custom DNS zone (for example, contoso.com) that points to the frontend IP address (using an A record) associated with your Standard v1 or WAF v1 gateway**.
+* **A custom DNS zone (for example, contoso.com) that points to the frontend IP address (using an A record) associated with your Standard V1 or WAF V1 gateway**.
- You can update your DNS record to point to the frontend IP or DNS label associated with your Standard_v2 application gateway. Depending on the TTL configured on your DNS record, it may take a while for all your client traffic to migrate to your new v2 gateway.
-* **A custom DNS zone (for example, contoso.com) that points to the DNS label (for example: *myappgw.eastus.cloudapp.azure.com* using a CNAME record) associated with your v1 gateway**.
+ You can update your DNS record to point to the frontend IP or DNS label associated with your Standard_V2 application gateway. Depending on the TTL configured on your DNS record, it may take a while for all your client traffic to migrate to your new V2 gateway.
+* **A custom DNS zone (for example, contoso.com) that points to the DNS label (for example: *myappgw.eastus.cloudapp.azure.com* using a CNAME record) associated with your V1 gateway**.
You have two choices:
- * If you use public IP addresses on your application gateway, you can do a controlled, granular migration using a Traffic Manager profile to incrementally route traffic (weighted traffic routing method) to the new v2 gateway.
+ * If you use public IP addresses on your application gateway, you can do a controlled, granular migration using a Traffic Manager profile to incrementally route traffic (weighted traffic routing method) to the new V2 gateway.
- You can do this by adding the DNS labels of both the v1 and v2 application gateways to the [Traffic Manager profile](../traffic-manager/traffic-manager-routing-methods.md#weighted-traffic-routing-method), and CNAMEing your custom DNS record (for example, `www.contoso.com`) to the Traffic Manager domain (for example, contoso.trafficmanager.net).
- * Or, you can update your custom domain DNS record to point to the DNS label of the new v2 application gateway. Depending on the TTL configured on your DNS record, it may take a while for all your client traffic to migrate to your new v2 gateway.
+ You can do this by adding the DNS labels of both the V1 and V2 application gateways to the [Traffic Manager profile](../traffic-manager/traffic-manager-routing-methods.md#weighted-traffic-routing-method), and CNAMEing your custom DNS record (for example, `www.contoso.com`) to the Traffic Manager domain (for example, contoso.trafficmanager.net).
+ * Or, you can update your custom domain DNS record to point to the DNS label of the new V2 application gateway. Depending on the TTL configured on your DNS record, it may take a while for all your client traffic to migrate to your new V2 gateway.
* **Your clients connect to the frontend IP address of your application gateway**.
- Update your clients to use the IP address(es) associated with the newly created v2 application gateway. We recommend that you don't use IP addresses directly. Consider using the DNS name label (for example, yourgateway.eastus.cloudapp.azure.com) associated with your application gateway that you can CNAME to your own custom DNS zone (for example, contoso.com).
+ Update your clients to use the IP address(es) associated with the newly created V2 application gateway. We recommend that you don't use IP addresses directly. Consider using the DNS name label (for example, yourgateway.eastus.cloudapp.azure.com) associated with your application gateway that you can CNAME to your own custom DNS zone (for example, contoso.com).
## Pricing considerations
-The pricing models are different for the Application Gateway v1 and v2 SKUs. V2 is charged based on consumption. See [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) before migrating for pricing information.
+The pricing models are different for the Application Gateway V1 and V2 SKUs. V2 is charged based on consumption. See [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) before migrating for pricing information.
### Cost efficiency guidance
The V2 SKU comes with a range of advantages such as a performance boost of 5x, i
There are five variants available in the V1 SKU based on tier and size: Standard_Small, Standard_Medium, Standard_Large, WAF_Medium, and WAF_Large.
-| SKU | v1 Fixed Price/mo | v2 Fixed Price/mo | Recommendation|
+| SKU | V1 Fixed Price/mo | V2 Fixed Price/mo | Recommendation|
| - |:-:|:--:|:--: |
|Standard Medium | 102.2 | 179.8|V2 SKU can handle a larger number of requests than a V1 gateway, so we recommend consolidating multiple V1 gateways into a single V2 gateway, to optimize the cost. Ensure that consolidation doesn't exceed the Application Gateway [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits). We recommend 3:1 consolidation. |
| WAF Medium | 183.96 | 262.8 |Same as for Standard Medium |
Common questions on migration can be found [here](./retirement-faq.md#faq-on-v1-
## Next steps
-[Learn about Application Gateway v2](application-gateway-autoscaling-zone-redundant.md)
+[Learn about Application Gateway V2](application-gateway-autoscaling-zone-redundant.md)
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
description: This article describes how to set up a user-assigned managed identi
Previously updated : 10/26/2021 Last updated : 05/01/2022
This article shows you how to add a user-assigned managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identities work with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities).

> [!NOTE]
-> **User-assigned managed identities (UAMI) are in general supported for Azure jobs only.** One other scenario in which user-assigned managed identities (UAMI) run successfully in Hybrid Workers is, when only the Hybrid Worker VM has a UAMI assigned (i.e., the Automation Account can't have any UAMI assigned, otherwise the VM UAMI will fail authenticating).
+> It is not possible to use a User Assigned Managed Identity on a Hybrid Runbook Worker when a Managed Identity (either System or User assigned) has been created for the Automation Account. If Managed Identity has not been assigned to the Automation Account, then it is possible to use the VMΓÇÖs System or User Assigned Managed Identity on a Hybrid Runbook Worker that is an Azure VM with the assigned managed identities.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
Example output:
You can connect to the above primary endpoint using SQL Server Management Studio and verify using DMVs as:
-```tsql
+```sql
SELECT * FROM sys.dm_hadr_availability_replica_states
```
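The same check can be scripted from a shell, as a sketch (the endpoint and login are placeholders; sqlcmd prompts for the password):

```bash
# Query replica states through the primary endpoint with sqlcmd
sqlcmd -S "<primary-endpoint>,1433" -U myUser \
    -Q "SELECT * FROM sys.dm_hadr_availability_replica_states"
```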
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This section describes the new features introduced or enabled for this release.
- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
-## March 2021
-
-The March 2021 release was initially introduced on April 5th 2021, and the final stages of release were completed April 9th 2021.
-
-Azure Data CLI (`azdata`) version number: 20.3.2. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
-
-### Data controller
-
-- Deploy Azure Arc-enabled data services data controller in direct connect mode from the portal. Start from [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
-
-### Azure Arc-enabled PostgreSQL server
-
-Both custom resource definitions (CRD) for PostgreSQL have been consolidated into a single CRD. See the following table.
-
-|Release |CRD |
-|--|--|
-|February 2021 and prior| postgresql-11s.arcdata.microsoft.com<br/>postgresql-12s.arcdata.microsoft.com |
-|Beginning March 2021 | postgresqls.arcdata.microsoft.com
-
-### Azure Arc-enabled SQL Managed Instance
-
-- You can now create a SQL Managed Instance from the Azure portal in the direct connected mode.
-
-- You can now restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
-
-- You can now connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
-
-## February 2021
-
-### New capabilities and features
-
-Azure Data CLI (`azdata`) version number: 20.3.1. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
-
-Additional updates include:
-
-- Azure Arc-enabled SQL Managed Instance
- - High availability with Always On availability groups
-
-- Azure Arc-enabled PostgreSQL server
- Azure Data Studio:
- - The overview page shows the status of the server group itemized per node
- - A new properties page shows more details about the server group
- - Configure Postgres engine parameters from **Node Parameters** page
-
-## January 2021
-
-### New capabilities and features
-
-Azure Data CLI (`azdata`) version number: 20.3.0. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
-
-Additional updates include:
-- Localized portal available for 17 new languages
-- Minor changes to Kube-native .yaml files
-- New versions of Grafana and Kibana
-- Issues with Python environments when using azdata in notebooks in Azure Data Studio resolved
-- The pg_audit extension is now available for PostgreSQL server
-- A backup ID is no longer required when doing a full restore of a PostgreSQL server database
-- The status (health state) is reported for each of the PostgreSQL instances in a server group
-
- In earlier releases, the status was aggregated at the server group level and not itemized at the PostgreSQL node level.
-
-- PostgreSQL deployments honor the volume size parameters indicated in create commands
-- The engine version parameters are now honored when editing a server group
-- The naming convention of the pods for Azure Arc-enabled PostgreSQL server has changed
-
- It is now in the form: `ServergroupName{c, w}-n`. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as:
- - `Postgres01c-0` (coordinator node)
- - `Postgres01w-0` (worker node)
- - `Postgres01w-1` (worker node)
-
-## December 2020
-
-### New capabilities & features
-
-Azure Data CLI (`azdata`) version number: 20.2.5. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
-
-View endpoints for SQL Managed Instance and PostgreSQL server using the Azure Data CLI (`azdata`) with `azdata arc sql endpoint list` and `azdata arc postgres endpoint list` commands.
-
-Edit SQL Managed Instance resource (CPU core and memory) requests and limits using Azure Data Studio.
-
-Azure Arc-enabled PostgreSQL server now supports point in time restore in addition to full backup restore for both versions 11 and 12 of PostgreSQL. The point in time restore capability allows you to indicate a specific date and time to restore to.
-
-The naming convention of the pods for Azure Arc-enabled PostgreSQL server has changed. It is now in the form: ServergroupName{r, s}-_n_. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as:
-- `postgres02r-0` (coordinator node)
-- `postgres02s-0` (worker node)
-- `postgres02s-1` (worker node)
-
-### Breaking change
-
-#### New resource provider
-
-This release introduces an updated [resource provider](../../azure-resource-manager/management/azure-services-resource-providers.md) called `Microsoft.AzureArcData`. Before you can use this feature, you need to register this resource provider.
-
-To register this resource provider:
-
-1. In the Azure portal, select **Subscriptions**
-2. Choose your subscription
-3. Under **Settings**, select **Resource providers**
-4. Search for `Microsoft.AzureArcData` and select **Register**
-
-You can review detailed steps at [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). This change also removes all the existing Azure resources that you have uploaded to the Azure portal. In order to use the resource provider, you need to update the data controller and use the latest `azdata` CLI.
-
-### Platform release notes
-
-#### Direct connectivity mode
-
-This release introduces direct connectivity mode. Direct connectivity mode enables the data controller to automatically upload the usage information to Azure. As part of the usage upload, the Arc data controller resource is automatically created in the portal, if it is not already created via `azdata` upload.
-
-You can specify direct connectivity when you create the data controller. The following example creates a data controller with `az arcdata dc create` named `arc` using direct connectivity mode (`connectivity-mode direct`). Before you run the example, replace `<subscription id>` with your subscription ID.
-
-```azurecli
-az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct
-```
-
-## October 2020
-
-Azure Data CLI (`azdata`) version number: 20.2.3. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
-
-### Breaking changes
-
-This release introduces the following breaking changes:
-
-* In the PostgreSQL custom resource definition (CRD), the term `shards` is renamed to `workers`. This term (`workers`) matches the command-line parameter name.
-
-* `azdata arc postgres server delete` prompts for confirmation before deleting a postgres instance. Use `--force` to skip prompt.
-
-### Additional changes
-
-* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgreSQL instances so that the backup of one PostgreSQL instance can be restored in another instance.
-
-* New short names for PostgreSQL custom resource definitions:
- * `pg11`
- * `pg12`
-* Telemetry upload provides user with either:
- * Number of points uploaded to Azure
- or
- * If no data has been loaded to Azure, a prompt to try it again.
-* `az arcdata dc debug copy-logs` now also reads from `/var/opt/controller/log` folder and collects PostgreSQL engine logs on Linux.
-* Display a working indicator during creating and restoring backup with PostgreSQL server.
-* `azdata arc postrgres backup list` now includes backup size information.
-* SQL Managed Instance admin name property was added to right column of overview blade in the Azure portal.
-* Azure Data Studio supports configuring number of worker nodes, vCore, and memory settings for PostgreSQL server.
-* Preview supports backup/restore for Postgres version 11 and 12.
-
-## September 2020
-
-Azure Arc-enabled data services allow you to manage data services anywhere. This is a preview release.
-
-- SQL Managed Instance
-- PostgreSQL server
-
-For instructions see [What are Azure Arc-enabled data services?](overview.md)
## Next steps

> **Just want to try things out?**
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
public static string SayHello([ActivityTrigger] string name)
In the .NET-isolated worker, only serializable types representing your input are supported for the `[ActivityTrigger]`. ```csharp
-[FunctionName("SayHello")]
+[Function("SayHello")]
public static string SayHello([ActivityTrigger] string name)
{
    return $"Hello {name}!";
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
public static async Task Run(
if (jobStatus == "Completed") { // Perform an action when a condition is met.
- await context.CallActivityAsync("SendAlert", machineId);
+ await context.CallActivityAsync("SendAlert", jobId);
break; }
public static async Task Run(
if (jobStatus == "Completed") { // Perform an action when a condition is met.
- await context.CallActivityAsync("SendAlert", machineId);
+ await context.CallActivityAsync("SendAlert", jobId);
break; }
module.exports = df.orchestrator(function*(context) {
const jobStatus = yield context.df.callActivity("GetJobStatus", jobId); if (jobStatus === "Completed") { // Perform an action when a condition is met.
- yield context.df.callActivity("SendAlert", machineId);
+ yield context.df.callActivity("SendAlert", jobId);
break; }
azure-functions Functions Machine Learning Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-machine-learning-tensorflow.md
Navigate to the *start* folder and run the following commands to create and acti
```bash cd start
-```
-
-```bash
python -m venv .venv
-```
-
-```bash
source .venv/bin/activate
```
sudo apt-get install python3-venv
```powershell cd start
-```
-
-```powershell
py -3.7 -m venv .venv
-```
-
-```powershell
.venv\scripts\activate
```
py -3.7 -m venv .venv
```cmd cd start
-```
-
-```cmd
py -3.7 -m venv .venv
-```
-
-```cmd
.venv\scripts\activate
```
azure-functions Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/ip-addresses.md
nslookup <APP_NAME>.azurewebsites.net
Each function app has a set of available outbound IP addresses. Any outbound connection from a function, such as to a back-end database, uses one of the available outbound IP addresses as the origin IP address. You can't know beforehand which IP address a given connection will use. For this reason, your back-end service must open its firewall to all of the function app's outbound IP addresses.
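As an alternative to the portal steps that follow, these lists are also exposed as properties on the function app resource. A minimal Azure CLI sketch, where the resource group and app name are placeholders:

```azurecli
# Placeholder names: replace <RESOURCE_GROUP> and <APP_NAME> with your values.
# IPs currently in use for outbound connections:
az functionapp show --resource-group <RESOURCE_GROUP> --name <APP_NAME> --query outboundIpAddresses --output tsv

# All IPs the app could ever use, including those of other scale units:
az functionapp show --resource-group <RESOURCE_GROUP> --name <APP_NAME> --query possibleOutboundIpAddresses --output tsv
```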
+> [!TIP]
+> For some platform-level features such as [Key Vault references](../app-service/app-service-key-vault-references.md), the origin IP might not be one of the outbound IPs, and you should not configure the target resource to rely on these specific addresses. It is recommended that the app instead use a virtual network integration, as the platform will route traffic to the target resource through that network.
+ To find the outbound IP addresses available to a function app: # [Azure portal](#tab/portal)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | VM Insights | X (Public preview) | X | | | | Microsoft Defender for Cloud | X (Public preview) | X | | | | Automation Update Management | | X | |
-| | Update Management | X (Public preview, independent of monitoring agents) | | |
+| | Update Management Center | N/A (Public preview, independent of monitoring agents) | | |
| | Change Tracking | X (Public preview) | X | | | | SQL Best Practices Assessment | X | | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Microsoft Sentinel | X ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | X | | | | VM Insights | X (Public preview) | X | | | | Microsoft Defender for Cloud | X (Public preview) | X | |
-| | Update Management | X (Public preview, independent of monitoring agents) | X | |
+| | Automation Update Management | | X | |
+| | Update Management Center | N/A (Public preview, independent of monitoring agents) | | |
| | Change Tracking | X (Public preview) | X | | <sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
View [supported operating systems for Azure Arc Connected Machine agent](../../a
> [!NOTE] > CBL-Mariner 2.0's disk size is by default around 1GB to provide storage COGS savings, compared to other Azure VMs that are around 30GB. However, the Azure Monitor Agent requires at least 4GB disk size in order to install and run successfully. Please check out [CBL-Mariner's documentation](https://eng.ms/docs/products/mariner-linux/gettingstarted/azurevm/azurevm#disk-size) for more information and instructions on how to increase disk size before installing the agent.
+### Linux Hardening Standards
+
+The Azure Monitor Agent for Linux now officially supports various hardening standards for Linux operating systems and distros. Every release of the agent is tested and certified against the supported hardening standards. We test against the images that are publicly available on the Azure Marketplace and published by CIS, and we only support the settings and hardening that are applied to those images. If you apply additional customizations to your own golden images, and those settings aren't covered by the CIS images, it's considered an unsupported scenario.
+
+*Only the Azure Monitor Agent for Linux supports these hardening standards. There are no plans to support them in the Log Analytics agent (legacy) or the Diagnostics extension.*
+
+Currently supported hardening standards:
+- SELinux
+- CIS Level 1 and 2<sup>1</sup>
+
+On the roadmap:
+- STIG
+- FIPS
+
+| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>|
+|:|::|::|::|
+| CentOS Linux 7 | X | | |
+| Debian 10 | X | | |
+| Ubuntu 18 | X | | |
+| Ubuntu 20 | X | | |
+| Red Hat Enterprise Linux Server 7 | X | | |
+| Red Hat Enterprise Linux Server 8 | X | | |
+
+<sup>1</sup> Supports only the above distros and versions
+ ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
> - If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later. > - If you have two agents on the same machine, avoid collecting duplicate data.<br> Collecting duplicate data from the same machine can skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate extra charges for data ingestion and retention.<br> > **To avoid data duplication:**
- > - Configure the agents to send the data to different workspaces or different tables in the same workspace.
- > - Disable duplicate data collection from legacy agents by [removing the workspace configurations](./agent-data-sources.md#configure-data-sources).
- > - Defender for Cloud natively deduplicates data when you use both agents, and [you'll be billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when you run the agents side by side.
- > - For Sentinel, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
+> - Configure the agents to send the data to different workspaces or different tables in the same workspace.
+> - Disable duplicate data collection from legacy agents by [removing the workspace configurations](./agent-data-sources.md#configure-data-sources).
+> - Defender for Cloud natively deduplicates data when you use both agents, and [you'll be billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when you run the agents side by side.
+> - For Sentinel, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
### Migration steps
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines.
-1. **Verify** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly:
+1. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly:
1. Look at the **Overview** and **Usage** tabs of [Log Analytics Workspace Insights](../logs/log-analytics-workspace-overview.md) for spikes or dips in ingestion rates following the migration. Check both the overall workspace ingestion and the table-level ingestion rates. 1. Check your workbooks, dashboards, and alerts for variances from typical behavior following the migration.
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
Title: Collect text logs with the Log Analytics agent in Azure Monitor description: Azure Monitor can collect events from text files on both Windows and Linux computers. This article describes how to define a new custom log and details of the records they create in Azure Monitor. -- Previously updated : 02/07/2022-++ Last updated : 05/03/2023+
A Log Analytics workspace supports the following limits:
>[!IMPORTANT] >Custom log collection requires that the application writing the log file flushes the log content to the disk periodically. This is because the custom log collection relies on filesystem change notifications for the log file being tracked.
-## Define a custom log
+## Define a custom log table
-Use the following procedure to define a custom log file. Scroll to the end of this article for a walkthrough of a sample of adding a custom log.
+Use the following procedure to define a custom log table. Scroll to the end of this article for a walkthrough of a sample of adding a custom log.
### Open the Custom Log wizard The Custom Log wizard runs in the Azure portal and allows you to define a new custom log to collect.
-1. In the Azure portal, select **Log Analytics workspaces** > your workspace.
-1. Under the **Classic** section, select **Legacy custom logs**.
-1. By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector.
-1. Select **Add** to open the Custom Log wizard.
+1. In the Azure portal, select **Log Analytics workspaces** > your workspace > **Tables**.
+1. Select **Create** and then **New custom log (MMA-based)**.
+
+ By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector.
+ ### Upload and parse a sample log
If a timestamp delimiter is used, the TimeGenerated property of each record stor
1. Select **Browse** and browse to a sample file. This button might be labeled **Choose File** in some browsers. 1. Select **Next**.
-1. The Custom Log wizard uploads the file and lists the records that it identifies.
+
+ The Custom Log wizard uploads the file and lists the records that it identifies.
+ 1. Change the delimiter that's used to identify a new record. Select the delimiter that best identifies the records in your log file. 1. Select **Next**.
After Azure Monitor starts collecting from the custom log, its records will be a
The entire log entry will be stored in a single property called **RawData**. You'll most likely want to separate the different pieces of information in each entry into individual properties for each record. For options on parsing **RawData** into multiple properties, see [Parse text data in Azure Monitor](../logs/parse-text.md).
-## Remove a custom log
-
-Use the following process in the Azure portal to remove a custom log that you previously defined.
+## Delete a custom log table
-1. On the left, under the **Classic** section for your workspace, select **Legacy custom Logs** to list all your custom logs.
-1. Select **Remove** next to the custom log to remove the log.
+See [Delete a table](../logs/create-custom-table.md#delete-a-table).
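+
+If you manage tables from the command line instead, a custom table can typically be deleted with the Azure CLI. A minimal sketch, assuming a recent CLI version and placeholder workspace and table names:
+
+```azurecli
+# Placeholder values: replace the resource group, workspace, and table name.
+az monitor log-analytics workspace table delete --resource-group <RESOURCE_GROUP> --workspace-name <WORKSPACE_NAME> --name MyTable_CL
+```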
## Data collection
azure-monitor Use Azure Monitor Agent Troubleshooter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/use-azure-monitor-agent-troubleshooter.md
+
+ Title: Use Azure Monitor Troubleshooter
+description: Detailed instructions on using the on-agent monitoring tool to diagnose potential issues.
+++ Last updated : 4/28/2023+++
+# customer-intent: As an IT manager, I want to investigate an agent issue on a particular virtual machine and determine whether I can resolve the issue on my own.
+
+# Use the Azure Monitor Agent Troubleshooter
+The Azure Monitor Agent isn't a service that runs in the context of an Azure resource provider. It may even run on on-premises machines within a customer network boundary. The Azure Monitor Agent Troubleshooter is designed to help diagnose issues with the agent and to run general agent health checks. It can verify agent installation, connection, and general heartbeat, and it can collect AMA-related logs automatically from the affected Windows or Linux VM. More scenarios will be added over time to increase the number of issues that can be diagnosed.
+> [!NOTE]
+> The Troubleshooter is a command-line executable that ships with the agent for all versions newer than **1.12.0.0** for Windows and **1.25.1** for Linux.
+> If you have an older version of the agent, you can't copy the Troubleshooter into a VM to diagnose the older agent.
++
+## Prerequisites
+The Linux Troubleshooter requires Python 2.6+ or any Python 3 version installed on the machine. In addition, the following Python packages are required to run (all should be present in a default installation of Python 2 or Python 3):
+
+|Python Package| Required for Python2? |Required for Python3?|
+|:|:|:|
+|copy| yes| yes|
+|datetime| yes| yes|
+|json| yes| yes|
+|os| yes| yes|
+|platform| yes| yes|
+|re| yes| yes|
+|requests| no| yes|
+|shutil| yes| yes|
+|subprocess| yes| yes|
+|urllib| yes| no|
+|xml.dom.minidom| yes| yes|
+
+## Windows Troubleshooter
+### Run Windows Troubleshooter
+1. Sign in to the machine to be diagnosed.
+2. Go to the location where the troubleshooter is automatically installed: `C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent/{version}/Troubleshooter`
+3. Run the Troubleshooter: `Troubleshooter --ama`
+
+### Evaluate the Windows Results
+The Troubleshooter runs two tests and collects several diagnostic logs.
+
+|Test| Description|
+|:|:|
+|Machine Network Configuration (Configuration) | This test checks the basic network connection, including IPv4 and IPv6 address resolution. If IPv6 isn't available on the machine, you see a warning.|
+|Connection to Control Plane (MCS) | This test checks whether the agent configuration information can be retrieved from the central data control plane. Configuration information includes which source data to collect and where it should be sent. All agent configuration is done through data collection rules.|
++
+### Share the Windows Results
+The detailed data collected by the troubleshooter includes system configuration, network configuration, environment variables, and agent configuration that can help you find any issues. The troubleshooter makes it easy to send this data to customer support by creating a zip file that should be attached to any customer support request. The file is located in C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent/{version}/Troubleshooter. The agent logs can be cryptic, but they can give you insight into problems that you may be experiencing.
+
+|Logfile | Contents|
+|:|:|
+|Curl.exe | Results of basic network connectivity checks for the agent, gathered with the Curl command so they aren't dependent on any agent software. |
+|AgentProcesses | Checks that all the agent processes are running and collects the environment variables that were used for each process. |
+|NetworkDiagnositc | This file has information on the SSL version and certificates.|
+|Table2csv.exe | Snapshot of all the data streams and tables that are configured in the agent, along with general information about the time range over which events were seen. |
+|ImdsMetadataResponse.json | Contains the results of the request to the Instance Metadata Service, which provides information about the VM on which the agent is running. |
+|TroubleshootingLogs | Contains a useful table in the Customer Data Statistics section for events that were collected in each local table over different time buckets. |
++
+## Linux Troubleshooter
+### Run Linux Troubleshooter
+1. Sign in to the machine to be diagnosed.
+2. Go to the location where the troubleshooter is automatically installed: `cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst`
+3. Run the Troubleshooter: `sudo sh ama_troubleshooter.sh A`
+
+There are six sections that cover different scenarios customers have historically had issues with. By entering 1-6 or A, you can diagnose issues with the agent. Adding an L creates a zip file that can be shared if technical support is needed.
+
+### Evaluate Linux Results
+The details for the covered scenarios are below:
+
+|Scenario | Tests|
+|:|:|
+|Agent having installation issues|Supported OS / version, Available disk space, Package manager is available (dpkg/rpm), Submodules are installed successfully, AMA installed properly, Syslog available (rsyslog/syslog-ng), Using newest version of AMA, Syslog user generated successfully|
+|Agent doesn't start, can't connect to Log Analytics|AMA parameters set up, AMA DCR created successfully, Connectivity to endpoints, Submodules started, IMDS/HIMDS metadata and MSI tokens available|
+|Agent is unhealthy, heartbeat doesn't work properly|Submodule status, Parse error files|
+|Agent has high CPU / memory usage|Check logrotate, Monitor CPU/memory usage in 5 minutes (interaction mode only)|
+|Agent syslog collection doesn't work properly|Rsyslog / syslog-ng setup and running, Syslog configuration being pulled / used, Syslog socket is accessible|
+|Agent custom log collection doesn't work properly|Custom log configuration being pulled / used, Log file paths are valid|
+
+### Share Linux Logs
+To create a zip file, use this command when running the troubleshooter: `sudo sh ama_troubleshooter.sh A L`. You'll be asked for a file location to create the zip file.
+
+## Next steps
+- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You can use `${...}` to read the value from the specified environment variable a
## Inherited attribute (preview)
-Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry and have it inherited by dependency telemetry that follows:
+Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry
+and have it inherited by dependency and log telemetry which are captured in the context of that request:
```json
{
Starting from version 3.2.0, if you want to set a custom dimension programmatica
}
```
+and then at the beginning of each request, call:
+
+```java
+Span.current().setAttribute("mycustomer", "xyz");
+```
+
+Also see: [Add a custom property to a Span](./opentelemetry-enable.md?tabs=java#add-a-custom-property-to-a-span).
+ ## Connection string overrides (preview) This feature is in preview, starting from 3.4.0.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
See [configuring the connection string](./java-standalone-config.md#connection-s
The rest of this document describes limitations and changes that you may encounter when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
-## TelemetryInitializers and TelemetryProcessors
+## TelemetryInitializers
-The 2.x SDK TelemetryInitializers and TelemetryProcessors will not be run when using the 3.x agent.
-Many of the use cases that previously required these can be solved in Application Insights Java 3.x
-by configuring [custom dimensions](./java-standalone-config.md#custom-dimensions)
-or configuring [telemetry processors](./java-standalone-telemetry-processors.md).
+2.x SDK TelemetryInitializers will not be run when using the 3.x agent.
+Many of the use cases that previously required writing a `TelemetryInitializer` can be solved in Application Insights Java 3.x
+by configuring [custom dimensions](./java-standalone-config.md#custom-dimensions).
+or using [inherited attributes](./java-standalone-config.md#inherited-attribute-preview).
+
+## TelemetryProcessors
+
+2.x SDK TelemetryProcessors will not be run when using the 3.x agent.
+Many of the use cases that previously required writing a `TelemetryProcessor` can be solved in Application Insights Java 3.x
+by configuring [sampling overrides](./java-standalone-config.md#sampling-overrides-preview).
## Multiple applications in a single JVM
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Check the data flow by going to the Azure portal and navigating to the Applicati
Additionally, you can use the SDK's trackPageView() method to manually send a page view event and verify that it appears in the portal.
-If you can't run the application or you aren't getting data as expected, wee the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
### Analytics
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Autocollected requests:
* Kafka consumers
* Netty
* Quartz
+* RabbitMQ
* Servlets * Spring scheduling
- > [!NOTE]
- > Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
+> [!NOTE]
+> Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
Autocollected dependencies (plus downstream distributed trace propagation):
Autocollected dependencies (plus downstream distributed trace propagation):
* Kafka
* Netty client
* OkHttp
+* RabbitMQ
Autocollected dependencies (without downstream distributed trace propagation):
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
See the [Activity log settings](#activity-log-settings) section.
Platform logs and metrics can be sent to the destinations listed in the following table.
-To ensure the security of data in transit, we strongly encourage you to configure Transport Layer Security (TLS). All destination endpoints support TLS 1.2.
+To ensure the security of data in transit, all destination endpoints are configured to support TLS 1.2.
| Destination | Description | |:|:|
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
To create a custom table, you need:
Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace. > [!NOTE]
-> For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log).
+> For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log-table).
# [Portal](#tab/azure-portal-1)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 04/04/2023 Last updated : 05/03/2023
This article lists significant changes to Azure Monitor documentation.
+## April 2023
+
+|Subservice| Article | Description |
+||||
+Agents|[Azure Monitor Agent Performance Benchmark](agents/azure-monitor-agent-performance.md)|Added performance benchmark data for the scenario of using Azure Monitor Agent to forward data to a gateway.|
+Agents|[Troubleshoot issues with the Log Analytics agent for Windows](agents/agent-windows-troubleshoot.md)|Log Analytics will no longer accept connections from MMA versions that use old root CAs (MMA versions prior to the Winter 2020 release for Log Analytics agent, and prior to SCOM 2019 UR3 for SCOM). |
+Agents|[Azure Monitor Agent overview](agents/agents-overview.md)|Log Analytics agent supports Windows Server 2022. |
+Alerts|[Common alert schema](alerts/alerts-common-schema.md)|Updated alert payload common schema to include custom properties.|
+Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified use of basic auth in webhook.|
+Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|We've made it easier to understand where to find iLogger telemetry.|
+Application-Insights|[Set up Azure Monitor for your Python application](app/opencensus-python.md)|Updated telemetry type mappings code sample.|
+Application-Insights|[Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](app/javascript-feature-extensions.md)|Code samples updated to use connection strings.|
+Application-Insights|[Connection strings](app/sdk-connection-string.md)|Code samples updated for .NET 6/7.|
+Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|Code samples updated for .NET 6/7.|
+Application-Insights|[Geolocation and IP address handling](app/ip-collection.md)|The PowerShell 'Update-AzApplicationInsights' code sample to disable IP masking has been updated.|
+Application-Insights|[Application Insights for Worker Service applications (non-HTTP applications)](app/worker-service.md)|The .NET Core app scenario chart has been updated.|
+Application-Insights|[Azure AD authentication for Application Insights](app/azure-ad-authentication.md)|Linked information on how to query Application Insights using Azure AD Authentication.|
+Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](app/opentelemetry-enable.md)|Java guidance and code samples have been updated.|
+Autoscale|[Configure autoscale with PowerShell](autoscale/autoscale-using-powershell.md)|New Article: Configure autoscale using PowerShell|
+Autoscale|[Get started with autoscale in Azure](autoscale/autoscale-get-started.md)|Refreshed article|
+Containers|[Monitor an Azure Kubernetes Service cluster using Container insights in Azure Monitor](/training/modules/aks-monitor/)|New Learn module: Monitor an Azure Kubernetes Service cluster using Container insights in Azure Monitor.|
+Containers|[Manage the Container insights agent](containers/container-insights-manage-agent.md)|Semantic version update of container insights agent version|
+Essentials|[Azure Monitor Metrics overview](essentials/data-platform-metrics.md)|New Batch Metrics API that allows multiple resource requests and reducing throttling found in the non-batch version. |
+General|[Cost optimization in Azure Monitor](best-practices-cost.md)|Rewritten to match organization of Well Architected Framework service guides|
+General|[Best practices for Azure Monitor Logs](best-practices-logs.md)|New article with consolidated list of best practices for Logs organized by WAF pillar.|
+General|[Migrate from System Center Operations Manager (SCOM) to Azure Monitor](azure-monitor-operations-manager.md)|Migrate from SCOM to Azure Monitor|
+Logs|[Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication](logs/api/app-insights-azure-ad-api.md)|New article that explains how to authenticate and access the Azure Monitor Application Insights APIs using Azure AD.|
+Logs|[Tutorial: Replace custom fields in Log Analytics workspace with KQL-based custom columns](logs/custom-fields-migrate.md)|Guidance for migrate legacy custom fields to KQL-based custom columns using transformations.|
+Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|View Log Analytics workspace health metrics, including query success metrics, directly from the Log Analytics workspace screen in the Azure portal.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Dedicated SQL Pool tables and Kubernetes services tables now support Basic logs.|
+Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Updated daily cap functionality for workspace-based Application Insights.|
+Profiler|[View Application Insights Profiler data](profiler/profiler-data.md)|Clarified this section based on user feedback.|
+Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-collector-release-notes.md)|Removed "how to view" sections and move into its own doc.|
+Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Updated link for release notes to the "Release notes" section in the Snapshot Debugger overview.|
+Snapshot-Debugger|[View Application Insights Snapshot Debugger data](snapshot-debugger/snapshot-debugger-data.md)|Created this new doc for viewing snapshots from content taken from the overview.|
+Snapshot-Debugger|[Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](snapshot-debugger/snapshot-debugger-function-app.md)|Updated link for release notes to the "Release notes" section in the Snapshot Debugger overview.|
+Snapshot-Debugger|[Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](snapshot-debugger/snapshot-debugger-troubleshoot.md)|Updated link for release notes to the "Release notes" section in the Snapshot Debugger overview.|
+Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines](snapshot-debugger/snapshot-debugger-vm.md)|Updated link for release notes to the "Release notes" section in the Snapshot Debugger overview.|
+Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-debugger.md)|Moved the release notes to the end of the Snapshot Debugger overview doc to improve page metrics.|
+Snapshot-Debugger|[What's new in Azure Monitor documentation](whats-new.md)|Updated link for release notes to the "Release notes" section in the Snapshot Debugger overview.|
+Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-debugger.md)|Updated .NET availability for Snapshot Debugger to avoid ".NET Core" and "LTS" language.|
+Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-debugger.md)|Added release notes for the 1.4.4 point release addressing user-reported bugs.|
+ ## March 2023 |Subservice| Article | Description |
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
na Previously updated : 03/07/2023 Last updated : 05/03/2023
The following diagram demonstrates how customer-managed keys work with Azure Net
1. Azure NetApp Files grants permissions to encryption keys to a managed identity. The managed identity is either a user-assigned managed identity that you create and manage or a system-assigned managed identity associated with the NetApp account. 2. You configure encryption with a customer-managed key for the NetApp account.
-3. You use the managed identity to which the Azure Key Vault admin granted permissions in step one to authenticate access to Azure Key Vault via Azure Active Directory.
+3. You use the managed identity to which the Azure Key Vault admin granted permissions in step 1 to authenticate access to Azure Key Vault via Azure Active Directory.
4. Azure NetApp Files wraps the account encryption key with the customer-managed key in Azure Key Vault. Customer-managed keys have no performance impact on Azure NetApp Files. The only difference from Microsoft-managed keys is how the key is managed.
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page.
* Customer-managed keys private endpoints don't support the **Disable public access** option. You must choose one of the **Allow public access** options.
-* Switching from user-assigned identity to the system-assigned identity isn't currently supported.
* MSI Automatic certificate renewal isn't currently supported. * The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer be valid and the customer-managed key volumes under the NetApp account will go offline.**
- * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message will communicate the date of eligibility.
+ * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
 * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example: `az netappfiles account renew-credentials --account-name myaccount --resource-group myresourcegroup`
- * If the account isn't eligible for MSI certificate renewal, an error will communicate the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline.
+ * If the account isn't eligible for MSI certificate renewal, an error message communicates the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline.
* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled. * If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
* The **Enter key URI** option allows you to enter manually the key URI. :::image type="content" source="../media/azure-netapp-files/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="../media/azure-netapp-files/key-enter-uri.png":::
-1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, then both options are available. Otherwise, only the user-assigned option is available.
+1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available.
 * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, and Decrypt. :::image type="content" source="../media/azure-netapp-files/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="../media/azure-netapp-files/encryption-system-assigned.png":::
- * If you choose **User-assigned**, you must select an identity to use. Choosing **Select an identity** opens a context pane prompting you to select a user-assigned managed identity.
+ * If you choose **User-assigned**, you must select an identity. Choose **Select an identity** to open a context pane where you select a user-assigned managed identity.
:::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png":::
- If you've configured your Azure Key Vault use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, Decrypt.
+ If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, Decrypt.
If you've configured your Azure Key Vault to use Azure role-based access control, then you need to make sure the selected user-assigned identity has a role assignment on the key vault with permissions for data actions:
* `Microsoft.KeyVault/vaults/keys/read`
You can use an Azure Key Vault that is configured to use Azure role-based access
1. `Microsoft.KeyVault/vaults/keys/encrypt/action` 1. `Microsoft.KeyVault/vaults/keys/decrypt/action`
- Although there are pre-defined roles with these permissions, they grant more privileges than are required. For the minimum level of privileges, you should create a custom role with only the required permissions. For details, see [Azure custom roles](../role-based-access-control/custom-roles.md).
+ Although there are predefined roles that include these permissions, those roles grant more privileges than are required. It's recommended that you create a custom role with only the minimum required permissions. For more information, see [Azure custom roles](../role-based-access-control/custom-roles.md).
```json {
You can use an Azure Key Vault that is configured to use Azure role-based access
} ```
-1. Once the custom role is created and available to use with the key vault, you can add a role assignment for your user-assigned identity.
+1. Once the custom role is created and available to use with the key vault, you apply it to the user-assigned identity.
:::image type="content" source="../media/azure-netapp-files/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="../media/azure-netapp-files/rbac-review-assign.png":::
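As an illustration, the same role assignment can also be created from the command line. A sketch with the Azure CLI, where the role name, principal ID, and key vault scope are placeholders:

```azurecli
# Placeholder values: custom role name, identity principal ID, and key vault resource ID.
az role assignment create \
  --role "<custom-role-name>" \
  --assignee-object-id <user-assigned-identity-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
```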
You can use an Azure Key Vault that is configured to use Azure role-based access
## Rekey all volumes under a NetApp account
-If you have already configured your NetApp account for customer-managed keys and has one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault, changing key vaults isn't supported.
+If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported.
1. Under your NetApp account, navigate to the **Encryption** menu. Under the **Current key** input field, select the **Rekey** link. :::image type="content" source="../media/azure-netapp-files/encryption-current-key.png" alt-text="Screenshot of the encryption key." lightbox="../media/azure-netapp-files/encryption-current-key.png":::
If you have already configured your NetApp account for customer-managed keys and
1. Select **OK** to save. The rekey operation may take several minutes.
+## Switch from system-assigned to user-assigned identity
+
+To switch from system-assigned to user-assigned identity, you must grant the target identity access to the key vault being used with read/get, encrypt, and decrypt permissions.
+
+1. Update the NetApp account by sending a PATCH request using the `az rest` command:
+ ```azurecli
+ az rest -m PATCH -u "<netapp-account-resource-id>?api-version=2022-09-01" -b @path/to/payload.json
+ ```
+ The payload should use the following structure:
+ ```json
+ {
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
+ "properties": {
+ "encryption": {
+ "identity": {
+ "userAssignedIdentity": "<identity-resource-id>"
+ }
+ }
+ }
+ }
+ ```
+1. Confirm the operation completed successfully with the `az netappfiles account show` command. The output includes the following fields:
+ ```azurecli
+ "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/account",
+ "identity": {
+ "principalId": null,
+ "tenantId": null,
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>": {
+ "clientId": "<client-id>",
+ "principalId": "<principalId>",
+ "tenantId": <tenantId>"
+ }
+ }
+ },
+ ```
+ Ensure that:
+ * `encryption.identity.principalId` matches the value in `identity.userAssignedIdentities.principalId`
+ * `encryption.identity.userAssignedIdentity` matches the value in `identity.userAssignedIdentities[]`
+
+ ```azurecli
+ "encryption": {
+ "identity": {
+ "principalId": "<principal-id>",
+ "userAssignedIdentity": "/subscriptions/<subscriptionId>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>"
+ },
+ "KeySource": "Microsoft.KeyVault",
+ },
+ ```
+ ## Error messages and troubleshooting
-This section lists error messages and possible resolutions when Azure NetApp Files fails to configure customer-managed key encryption or create a volume using a customer-managed key.
+This section lists error messages and possible resolutions when Azure NetApp Files fails to configure customer-managed key encryption or create a volume using a customer-managed key.
### Errors configuring customer-managed key encryption on a NetApp account
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 04/6/2023 Last updated : 05/03/2023 # SMB FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about the SMB protocol of
Azure NetApp Files supports SMB 2.1 and SMB 3.1 (which includes support for SMB 3.0).
+## Does Azure NetApp Files support access to 'offline files' on SMB volumes?
+
+Azure NetApp Files supports 'manual' offline files, allowing users on Windows clients to manually select files to be cached locally.
+ ## Is an Active Directory connection required for SMB access? Yes, you must create an Active Directory connection before deploying an SMB volume. The specified Domain Controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. See [Create an SMB volume](./azure-netapp-files-create-volumes-smb.md) for details.
azure-resource-manager Create Storage Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-storage-customer-managed-key.md
+
+ Title: Create Azure Managed Application that deploys storage account encrypted with customer-managed key
+description: This article describes how to create an Azure Managed Application that deploys a storage account encrypted with a customer-managed key.
+++ Last updated : 05/01/2023++
+# Create Azure Managed Application that deploys storage account encrypted with customer-managed key
+
+This article describes how to create an Azure Managed Application that deploys a storage account encrypted using a customer-managed key. Storage accounts, Azure Cosmos DB, and Azure Database for PostgreSQL support data encryption at rest using customer-managed keys or Microsoft-managed keys. You can use your own encryption key to protect the data in your storage account. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Customer-managed keys offer greater flexibility to manage access controls.
+
+## Prerequisites
+
+- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Be familiar with how to [create](publish-service-catalog-app.md) and [deploy](deploy-service-catalog-quickstart.md) a service catalog definition.
+
+## Managed identities
+
+Configuring a customer-managed key for a storage account deployed by the managed application as a resource within the managed resource group requires a user-assigned managed identity. This user-assigned managed identity can be used to grant the managed application access to other existing resources. To learn how to configure your managed application with a user-assigned managed identity, go to [Azure Managed Application with managed identity](publish-managed-identity.md).
+
+Your application can be granted two types of identities:
+
+- A **system-assigned managed identity** is assigned to your application and is deleted if your app is deleted. An app can only have one system-assigned managed identity.
+- A **user-assigned managed identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned managed identities.
+
+To deploy a storage account in your managed application's managed resource group that's encrypted with customer-managed keys from an existing key vault, more configuration is required. The managed identity configured with your managed application needs the built-in Azure role-based access control role _Managed Identity Operator_ over the managed identity that has access to the key vault. For more details, go to [Managed Identity Operator role](../../role-based-access-control/built-in-roles.md#managed-identity-operator).
+
+## Create a key vault with purge protection
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. From the Azure portal menu, or from the Home page, select **Create a resource**.
+1. In the Search box, enter _Key Vault_.
+1. From the results list, select **Key Vault**.
+1. On the **Key Vault** page, select **Create**.
+1. On the **Create key vault** page, provide the following information:
+ - **Subscription**: Select your subscription.
+ - **Resource Group**: Select **Create new** and enter a name like _demo-cmek-rg_.
+ - **Name**: A unique name is required, like _demo-keyvault-cmek_.
+ - **Region**: Select a location like East US.
+ - **Pricing tier**: Select _Standard_ from the drop-down list.
+ - **Purge protection**: Select _Enable purge protection_.
+1. Select **Next** and go to the **Access Policy** tab.
+ - **Access configuration**: Select _Azure role-based access control_.
+ - Accept the defaults for all the other options.
+1. Select **Review + create**.
+1. Confirm the settings are correct and select **Create**.
+
+After the successful deployment, select **Go to resource**. On the **Overview** tab, make note of the following properties:
+
+- **Vault Name**: In the example, the vault name is _demo-keyvault-cmek_. You use this name for other steps.
+- **Vault URI**: In the example, the vault URI is `https://demo-keyvault-cmek.vault.azure.net/`.
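+
+If you prefer scripting this step, the same vault can be created with the Azure CLI. The following is a minimal sketch using the illustrative names from this article; adjust the names and region to your environment.
+
+```azurecli-interactive
+# Create the resource group and a key vault with purge protection
+# and Azure RBAC authorization enabled (illustrative names).
+az group create --name demo-cmek-rg --location eastus
+
+az keyvault create \
+  --name demo-keyvault-cmek \
+  --resource-group demo-cmek-rg \
+  --location eastus \
+  --enable-purge-protection true \
+  --enable-rbac-authorization true
+```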
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the _Managed Identity Contributor_ role assignment.
+
+1. In the search box, enter _managed identities_.
+1. Under Services, select **Managed Identities**.
+1. Select **Create** and enter the following values on the **Basics** tab:
+ - **Subscription**: Select your subscription.
+ - **Resource group**: Select the resource group _demo-cmek-rg_ that you created in the previous steps.
+ - **Region**: Select a region like East US.
+ - **Name**: Enter the name for your user-assigned managed identity, like _demokeyvaultmi_.
+1. Select **Review + create**.
+1. After **Validation Passed** is displayed, select **Create**.
+
+After a successful deployment, select **Go to resource**.
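+
+As a command-line alternative, a user-assigned managed identity can be created with `az identity create`. A minimal sketch with the example name:
+
+```azurecli-interactive
+# Create the user-assigned managed identity that will access the key vault.
+az identity create \
+  --name demokeyvaultmi \
+  --resource-group demo-cmek-rg \
+  --location eastus
+```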
+
+## Create role assignments
+
+You need to create two role assignments for your key vault. For details, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+### Grant key permission on key vault to the managed identity
+
+Create a role assignment that allows the key vault managed identity _demokeyvaultmi_ to wrap and unwrap keys.
+
+1. Go to your key vault _demo-keyvault-cmek_.
+1. Select **Access control (IAM)**.
+1. Select **Add** > **Add role assignment**.
+1. Assign the following role:
+ - **Role**: Key Vault Crypto Service Encryption User
+ - **Assign Access to**: Managed identity
+ - **Member**: _demokeyvaultmi_
+1. Select **Review + assign** to view your settings.
+1. Select **Review + assign** to create the role assignment.
+
+### Create a role assignment for your account
+
+Create another role assignment so that your account can create a new key in your key vault.
+
+1. Assign the following role:
+ - **Role**: Key Vault Crypto Officer
+ - **Assign Access to**: User, group, or service principal
+ - **Member**: Your Azure Active Directory account
+1. Select **Review + assign** to view your settings.
+1. Select **Review + assign** to create the role assignment.
+
+You can verify the key vault's role assignments in **Access control (IAM)** > **Role assignments**.
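+
+Both role assignments can also be scripted. The following sketch assumes the example vault and identity names from the previous steps, and a recent Azure CLI where the signed-in user object exposes an `id` property:
+
+```azurecli-interactive
+# Resolve the key vault resource ID and the identity's principal ID.
+KV_ID=$(az keyvault show --name demo-keyvault-cmek --query id --output tsv)
+KV_MI_PRINCIPAL_ID=$(az identity show \
+  --name demokeyvaultmi \
+  --resource-group demo-cmek-rg \
+  --query principalId --output tsv)
+
+# Allow the managed identity to wrap and unwrap keys.
+az role assignment create \
+  --assignee-object-id $KV_MI_PRINCIPAL_ID \
+  --assignee-principal-type ServicePrincipal \
+  --role "Key Vault Crypto Service Encryption User" \
+  --scope $KV_ID
+
+# Allow your signed-in account to create keys in the vault.
+az role assignment create \
+  --assignee $(az ad signed-in-user show --query id --output tsv) \
+  --role "Key Vault Crypto Officer" \
+  --scope $KV_ID
+```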
+
+## Create a key
+
+You need to create a key that your key vault uses to encrypt a storage account.
+
+1. Go to your key vault, _demo-keyvault-cmek_.
+1. Select **Keys**.
+1. Select **Generate/Import**.
+1. On the **Create a key** page, select the following values:
+ - **Options**: Generate
+ - **Name**: _demo-cmek-key_
+1. Accept the defaults for the other options.
+1. Select **Create**.
+
+Make a note of the key name. You use it when you deploy the managed application.
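+
+The key can also be generated from the command line; a minimal sketch with the example names:
+
+```azurecli-interactive
+# Generate the key used to encrypt the storage account.
+az keyvault key create \
+  --vault-name demo-keyvault-cmek \
+  --name demo-cmek-key
+```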
+
+### Create a user-assigned managed identity for the managed application
+
+Create a user-assigned managed identity to be used as the managed identity for the managed application.
+
+1. In the search box, enter _Managed Identities_.
+1. Under Services, select **Managed Identities**.
+1. Select **Create**.
+ - **Subscription**: Select your subscription.
+ - **Resource group**: Select the resource group _demo-cmek-rg_.
+ - **Region**: Select a region like East US.
+ - **Name**: Enter the name for your user-assigned managed identity, like _demomanagedappmi_.
+1. Select **Review + create**.
+1. After **Validation Passed** is displayed, select **Create**.
+
+After a successful deployment, select **Go to resource**.
+
+## Assign role permission to managed identity
+
+Assign the _Managed Identity Operator_ role to _demomanagedappmi_ at the scope of the user-assigned managed identity named _demokeyvaultmi_.
+
+1. Go to the user-assigned managed identity named _demokeyvaultmi_.
+1. Select **Access control (IAM)**.
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+1. Assign the following role:
+ - **Role**: Managed Identity Operator
+ - **Assign Access to**: Managed Identity
+ - **Member**: _demomanagedappmi_
+1. Select **Review + assign** to view your settings.
+1. Select **Review + assign** to create the role assignment.
+
+You can verify the role assignment for _demokeyvaultmi_ in **Access control (IAM)** > **Role assignments**.
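+
+This role assignment can be scripted as well. A sketch, assuming the two example identities created earlier:
+
+```azurecli-interactive
+# Resolve the principal ID of the managed application's identity and the
+# resource ID of the key vault identity that scopes the assignment.
+APP_MI_PRINCIPAL_ID=$(az identity show \
+  --name demomanagedappmi \
+  --resource-group demo-cmek-rg \
+  --query principalId --output tsv)
+KV_MI_ID=$(az identity show \
+  --name demokeyvaultmi \
+  --resource-group demo-cmek-rg \
+  --query id --output tsv)
+
+# Grant demomanagedappmi the Managed Identity Operator role over demokeyvaultmi.
+az role assignment create \
+  --assignee-object-id $APP_MI_PRINCIPAL_ID \
+  --assignee-principal-type ServicePrincipal \
+  --role "Managed Identity Operator" \
+  --scope $KV_MI_ID
+```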
+
+## Sample managed application template
+
+Create a managed application that deploys a storage account in a managed resource group and uses a pre-existing key vault's key to encrypt the data in the storage account.
+
+To publish a managed application to your service catalog, do the following tasks:
+
+1. Create the [createUiDefinition.json](#create-template-createuidefinitionjson) file from the sample in this article. The template defines the portal's user interface elements when deploying the managed application.
+1. Create an Azure Resource Manager template named [mainTemplate.json](#create-template-maintemplatejson) by converting the Bicep file in this article to JSON. The template defines the resources to deploy with the managed application.
+1. Create a _.zip_ package that contains the required JSON files: _createUiDefinition.json_ and _mainTemplate.json_.
+1. Publish the managed application definition so it's available in your service catalog. For more information, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
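+
+The packaging and publishing steps (3 and 4 in the preceding list) can be scripted once both JSON files exist. A sketch with illustrative values; `<PRINCIPAL_ID>:<ROLE_DEFINITION_ID>` and `<PACKAGE_URI>` are placeholders for your authorization pair and the location where you've uploaded the _.zip_ package:
+
+```azurecli-interactive
+# Package the two files required by the managed application definition.
+zip app.zip createUiDefinition.json mainTemplate.json
+
+# Publish the definition to your service catalog.
+az managedapp definition create \
+  --name demo-cmek-definition \
+  --resource-group demo-cmek-rg \
+  --location eastus \
+  --lock-level ReadOnly \
+  --display-name "Storage with CMEK" \
+  --description "Deploys a storage account encrypted with a customer-managed key" \
+  --authorizations "<PRINCIPAL_ID>:<ROLE_DEFINITION_ID>" \
+  --package-file-uri "<PACKAGE_URI>"
+```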
+
+### Create template createUiDefinition.json
+
+The following template creates a user-assigned managed identity for the managed application. In this example, we disable the system-assigned managed identity because the user-assigned managed identity must be configured in advance with the _Managed Identity Operator_ permissions over the key vault's managed identity.
+
+1. Create a new file in Visual Studio Code named _createUiDefinition.json_.
+1. Copy and paste the following code into the file.
+1. Save the file.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
+ "handler": "Microsoft.Azure.CreateUIDef",
+ "version": "0.1.2-preview",
+ "parameters": {
+ "basics": [],
+ "steps": [
+ {
+ "name": "managedApplicationSetting",
+ "label": "Application Settings",
+ "subLabel": {
+ "preValidation": "Configure your application settings and Managed Identity for the application",
+ "postValidation": "Done"
+ },
+ "bladeTitle": "Application Settings - Config",
+ "elements": [
+ {
+ "name": "appIdentity",
+ "type": "Microsoft.ManagedIdentity.IdentitySelector",
+ "label": "Managed Identity Configuration for the Application (Needs Managed Identity Operator permissions over KV Managed Identity).",
+ "toolTip": {
+ "systemAssignedIdentity": "Enable system assigned identity to grant the managed application access to additional existing resources.",
+ "userAssignedIdentity": "Add user assigned identities to grant the managed application access to additional existing resources."
+ },
+ "defaultValue": {
+ "systemAssignedIdentity": "Off"
+ },
+ "options": {
+ "hideSystemAssignedIdentity": true,
+ "hideUserAssignedIdentity": false,
+ "readOnlySystemAssignedIdentity": true
+ },
+ "visible": true
+ }
+ ]
+ },
+ {
+ "name": "configuration",
+ "type": "Microsoft.Common.Section",
+ "label": "Configuration",
+ "elements": [
+ {
+ "name": "cmek",
+ "type": "Microsoft.Common.Section",
+ "label": "Customer Managed Encryption Key (CMEK)",
+ "elements": [
+ {
+ "name": "cmekEnable",
+ "type": "Microsoft.Common.CheckBox",
+ "label": "Enable CMEK",
+ "toolTip": "Enable to provide a CMEK",
+ "constraints": {
+ "required": false
+ }
+ },
+ {
+ "name": "cmekKeyVaultUrl",
+ "type": "Microsoft.Common.TextBox",
+ "label": "Key Vault URL",
+ "toolTip": "Specify the CMEK Key Vault URL",
+ "defaultValue": "",
+ "constraints": {
+ "required": "[steps('configuration').cmek.cmekEnable]",
+ "regex": ".*",
+ "validationMessage": "The value must not be empty."
+ },
+ "visible": "[steps('configuration').cmek.cmekEnable]"
+ },
+ {
+ "name": "cmekKeyName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "Key Name",
+ "toolTip": "Specify the key name from your key vault.",
+ "defaultValue": "",
+ "constraints": {
+ "required": "[steps('configuration').cmek.cmekEnable]",
+ "regex": ".*",
+ "validationMessage": "The value must not be empty."
+ },
+ "visible": "[steps('configuration').cmek.cmekEnable]"
+ },
+ {
+ "name": "cmekKeyIdentity",
+ "type": "Microsoft.ManagedIdentity.IdentitySelector",
+ "label": "Managed Identity Configuration for Key Vault Access",
+ "toolTip": {
+ "systemAssignedIdentity": "Enable system assigned identity to grant the managed application access to additional existing resources.",
+ "userAssignedIdentity": "Add user assigned identities to grant the managed application access to additional existing resources."
+ },
+ "defaultValue": {
+ "systemAssignedIdentity": "Off"
+ },
+ "options": {
+ "hideSystemAssignedIdentity": true,
+ "hideUserAssignedIdentity": false,
+ "readOnlySystemAssignedIdentity": true
+ },
+ "visible": "[steps('configuration').cmek.cmekEnable]"
+ }
+ ],
+ "visible": true
+ }
+ ]
+ }
+ ],
+ "outputs": {
+ "location": "[location()]",
+ "managedIdentity": "[steps('managedApplicationSetting').appIdentity]",
+ "cmekConfig": {
+ "kvUrl": "[if(empty(steps('configuration').cmek.cmekKeyVaultUrl), '', steps('configuration').cmek.cmekKeyVaultUrl)]",
+ "keyName": "[if(empty(steps('configuration').cmek.cmekKeyName), '', steps('configuration').cmek.cmekKeyName)]",
+ "identityId": "[if(empty(steps('configuration').cmek.cmekKeyIdentity), '', steps('configuration').cmek.cmekKeyIdentity)]"
+ }
+ }
+ }
+}
+```
+
+### Create template mainTemplate.json
+
+The following Bicep file is the source code for your _mainTemplate.json_. The template uses the user-assigned managed identity defined in the _createUiDefinition.json_ file.
+
+1. Create a new file in Visual Studio Code named _mainTemplate.bicep_.
+1. Copy and paste the following code into the file.
+1. Save the file.
+
+```bicep
+param cmekConfig object = {
+ kvUrl: ''
+ keyName: ''
+ identityId: {}
+}
+@description('Specify the Azure region to place the application definition.')
+param location string = resourceGroup().location
+/////////////////////////////////
+// Common Resources Configuration
+/////////////////////////////////
+var commonproperties = {
+ name: 'cmekdemo'
+ displayName: 'Common Resources'
+ storage: {
+ sku: 'Standard_LRS'
+ kind: 'StorageV2'
+ accessTier: 'Hot'
+ minimumTlsVersion: 'TLS1_2'
+
+ }
+}
+var identity = items(cmekConfig.identityId.userAssignedIdentities)[0].key
+
+resource storage 'Microsoft.Storage/storageAccounts@2022-05-01' = {
+ name: '${commonproperties.name}${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: commonproperties.storage.sku
+ }
+ kind: commonproperties.storage.kind
+ identity: cmekConfig.identityId
+ properties: {
+ accessTier: commonproperties.storage.accessTier
+ minimumTlsVersion: commonproperties.storage.minimumTlsVersion
+    encryption: {
+      identity: {
+        userAssignedIdentity: identity
+      }
+      services: {
+        blob: {
+          enabled: true
+        }
+        table: {
+          enabled: true
+        }
+        file: {
+          enabled: true
+        }
+      }
+      keySource: 'Microsoft.Keyvault'
+      keyvaultproperties: {
+        keyname: '${cmekConfig.keyName}'
+        keyvaulturi: '${cmekConfig.kvUrl}'
+      }
+    }
+  }
+}
+```
+
+Use PowerShell or Azure CLI to build the _mainTemplate.json_ file. Go to the directory where you saved your Bicep file and run the `build` command.
+
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+bicep build mainTemplate.bicep
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az bicep build --file mainTemplate.bicep
+```
+++
+After the Bicep file is converted to JSON, your _mainTemplate.json_ file should match the following example. You might have different values in the `metadata` properties for `version` and `templateHash`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "metadata": {
+ "_generator": {
+ "name": "bicep",
+ "version": "0.16.2.56959",
+ "templateHash": "1234567891234567890"
+ }
+ },
+ "parameters": {
+ "cmekConfig": {
+ "type": "object",
+ "defaultValue": {
+ "kvUrl": "",
+ "keyName": "",
+ "identityId": {}
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specify the Azure region to place the application definition."
+ }
+ }
+ },
+ "variables": {
+ "commonproperties": {
+ "name": "cmekdemo",
+ "displayName": "Common Resources",
+ "storage": {
+ "sku": "Standard_LRS",
+ "kind": "StorageV2",
+ "accessTier": "Hot",
+ "minimumTlsVersion": "TLS1_2"
+ }
+ },
+ "identity": "[items(parameters('cmekConfig').identityId.userAssignedIdentities)[0].key]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-05-01",
+ "name": "[format('{0}{1}', variables('commonproperties').name, uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[variables('commonproperties').storage.sku]"
+ },
+ "kind": "[variables('commonproperties').storage.kind]",
+ "identity": "[parameters('cmekConfig').identityId]",
+ "properties": {
+ "accessTier": "[variables('commonproperties').storage.accessTier]",
+ "minimumTlsVersion": "[variables('commonproperties').storage.minimumTlsVersion]",
+ "encryption": {
+ "identity": {
+ "userAssignedIdentity": "[variables('identity')]"
+ },
+ "services": {
+ "blob": {
+ "enabled": true
+ },
+ "table": {
+ "enabled": true
+ },
+ "file": {
+ "enabled": true
+ }
+ },
+ "keySource": "Microsoft.Keyvault",
+ "keyvaultproperties": {
+ "keyname": "[format('{0}', parameters('cmekConfig').keyName)]",
+ "keyvaulturi": "[format('{0}', parameters('cmekConfig').kvUrl)]"
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+## Deploy the managed application
+
+After the service catalog definition is created, you can deploy the managed application. For more information, go to [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md).
+
+During the deployment, you use your user-assigned managed identities, key vault name, key vault URL, and the key vault's key name. The _createUiDefinition.json_ file creates the user interface.
+
+For example, in a portal deployment, on the **Application Settings** tab, you add the _demomanagedappmi_.
++
+On the **Configuration** tab, you enable the customer-managed key and add the user-assigned managed identity for the key vault, _demokeyvaultmi_. You also specify the key vault's URL and the key vault's key name that you created.
++
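+A command-line deployment is also possible with `az managedapp create`. The following sketch uses placeholder values; the `--parameters` object must match the outputs of your _createUiDefinition.json_ (`location`, `managedIdentity`, and `cmekConfig`):
+
+```azurecli-interactive
+az managedapp create \
+  --name demo-cmek-app \
+  --resource-group demo-cmek-rg \
+  --location eastus \
+  --kind ServiceCatalog \
+  --managedapp-definition-id "<DEFINITION_RESOURCE_ID>" \
+  --managed-rg-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/demo-cmek-app-mrg" \
+  --parameters "<PARAMETERS_JSON>"
+```
+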
+## Verify the deployment
+
+After the deployment is complete, you can verify the managed application's identity assignment. The user-assigned managed identity _demomanagedappmi_ is assigned to the managed application.
+
+1. Go to the resource group where you deployed the managed application.
+1. Under **Settings** > **Identity**, select **User assigned (preview)**.
+
+You can also verify the storage account that the managed application deployed. The **Encryption** tab shows the key _demo-cmek-key_ and the resource ID for the user-assigned managed identity.
+
+1. Go to the managed resource group where the managed application's storage account is deployed.
+1. Under **Security + networking**, select **Encryption**.
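+
+You can also confirm the encryption settings from the command line. A sketch that lists the key source and key name of the storage account in the managed resource group (replace the placeholder with your managed resource group name):
+
+```azurecli-interactive
+az storage account list \
+  --resource-group "<MANAGED_RESOURCE_GROUP>" \
+  --query "[].{name:name, keySource:encryption.keySource, keyName:encryption.keyVaultProperties.keyName}" \
+  --output table
+```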
+
+## Next steps
+
+- For more information about storage encryption, go to [Customer-managed keys for Azure Storage encryption](../../storage/common/customer-managed-keys-overview.md).
+- For more information about user-assigned managed identity with permissions to access the key in the key vault, go to [Configure customer-managed keys in the same tenant for an existing storage account](../../storage/common/customer-managed-keys-configure-existing-account.md).
azure-resource-manager Publish Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-managed-identity.md
The _createUiDefinition.json_ supports a built-in [managed identity control](./m
{ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#", "handler": "Microsoft.Azure.CreateUIDef",
- "version": "0.0.1-preview",
+ "version": "0.1.2-preview",
"parameters": { "basics": [], "steps": [
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Title: Concepts - Run command in Azure VMware Solution (Preview)
-description: Learn about using run commands in Azure VMware Solution.
+ Title: Concepts - Run Command in Azure VMware Solution (Preview)
+description: Learn about using Run Commands in Azure VMware Solution.
Previously updated : 10/25/2022 Last updated : 5/3/2023
-# Run command in Azure VMware Solution
+# Run Command in Azure VMware Solution
-In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
+In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run Command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
Azure VMware Solution supports the following operations:
Azure VMware Solution supports the following operations:
- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) -- [Use HCX Run commands](use-hcx-run-commands.md)
+- [Use VMware HCX Run Commands](use-hcx-run-commands.md)
>[!NOTE]
->Run commands are executed one at a time in the order submitted.
+>Run Commands are executed one at a time in the order submitted.
## View the status of an execution
-You can view the status of any executed run command, including the output, errors, warnings, and information logs of the cmdlets.
+You can view the status of any executed Run Command, including the output, errors, warnings, and information logs of the cmdlets.
1. Sign in to the [Azure portal](https://portal.azure.com).
You can view the status of any executed run command, including the output, error
You can sort by the various columns by selecting the column.
- :::image type="content" source="media/run-command/run-execution-status.png" alt-text="Screenshot showing Run execution status tab." lightbox="media/run-command/run-execution-status.png":::
+ :::image type="content" source="media/run-command/run-execution-status.png" alt-text="Screenshot showing Run Command execution status tab." lightbox="media/run-command/run-execution-status.png":::
1. Select the execution you want to view. A pane opens with details about the execution, and other tabs for the various types of output generated by the cmdlet.
This method attempts to cancel the execution, and then deletes it upon completio
## Next steps
-Now that you've learned about the Run command concepts, you can use the Run command feature to:
+Now that you've learned about the Run Command concepts, you can use the Run Command feature to:
- [Configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned a vSAN storage policy. You can assign a vSAN storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating. -- [Configure external identity source for vCenter (Run command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter Server, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role.
+- [Configure external identity source for vCenter Server (Run Command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter Server, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role.
-- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
+- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying vSphere datastore can be VMFS, vSAN, vVol, or any supported HCI platform.
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud
description: Learn how to access an Azure VMware Solution private cloud Previously updated : 4/11/2023 Last updated : 5/3/2023
In this tutorial, you learn how to:
1. In the Azure portal, select your private cloud, and then **Manage** > **VMware credentials**.
- The URLs and user credentials for private cloud vCenter Server and NSX-T Manager display.
+ The URLs and user credentials for private cloud vCenter Server and NSX-T Manager are displayed.
:::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter Server and NSX Manager URLs and credentials."lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
In this tutorial, you learn how to:
:::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client."lightbox="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" border="true":::
-1. In the second tab of the browser, sign in to NSX-T Manager.
+1. In the second tab of the browser, sign in to NSX-T Manager with the 'cloudadmin' user credentials from earlier.
:::image type="content" source="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" alt-text="Screenshot of the NSX-T Manager sign in page."lightbox="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" border="true":::
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
description: This article covers troubleshooting Cloud Shell common scenarios. ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 05/03/2023 tags: azure-resource-manager Title: Azure Cloud Shell troubleshooting
This article covers troubleshooting Cloud Shell common scenarios.
automatically. You can choose to restore the previous behavior by adding `Connect-AzureAD` to the $PROFILE file in PowerShell.
+ > [!NOTE]
+ > These cmdlets are part of the **AzureAD.Standard.Preview** module. That module is being
+ > deprecated and won't be supported after June 30, 2023. You can use the AD cmdlets in the
+ > **Az.Resources** module or use the Microsoft Graph API instead. The **Az.Resources** module is
+ > installed by default. The **Microsoft Graph API PowerShell SDK** modules aren't installed by
+ > default. For more information, see [Upgrade from AzureAD to Microsoft Graph][06].
+ ### Early timeouts in FireFox - **Details**: Cloud Shell uses an open websocket to pass input/output to your browser. FireFox has
Azure Cloud Shell in Azure Government is only accessible through the Azure porta
<!-- link references --> [04]: https://docs.docker.com/machine/overview/ [05]: persisting-shell-storage.md#mount-a-new-clouddrive
+[06]: /powershell/microsoftgraph/migration-steps
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
Follow these steps to install the Speech SDK for Java using Apache Maven:
<dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>
- <version>1.27.0</version>
+ <version>1.28.0</version>
</dependency> </dependencies> </project>
Be sure to use the `@aar` suffix when the dependency is specified in `build.grad
``` dependencies {
- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.27.0@aar'
+ implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.28.0@aar'
} ``` ::: zone-end
embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.
You can find ready-to-use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects from scratch, see the sample-specific documentation: - [C# (.NET 6.0)](https://aka.ms/embedded-speech-samples-csharp)
+- [C# (.NET MAUI)](https://aka.ms/embedded-speech-samples-csharp-maui)
- [C# for Unity](https://aka.ms/embedded-speech-samples-csharp-unity) ::: zone-end
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-overview.md
Azure Kubernetes Service (AKS) supports adding [Intel SGX confidential computing
- Hardware-based, process-level container isolation through Intel SGX trusted execution environment (TEE) - Heterogeneous node pool clusters (mix confidential and non-confidential node pools)-- Encrypted Page Cache (EPC) memory-based pod scheduling through "Confcon" AKS addon
+- Encrypted Page Cache (EPC) memory-based pod scheduling through "confcom" AKS addon
- Intel SGX DCAP driver pre-installed and kernel dependency installed - CPU consumption based horizontal pod autoscaling and cluster autoscaling - Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes ## Confidential Computing add-on for AKS
-The add-on feature enables extra capability on AKS when running confidential computing Intel SGX capable node pools on the cluster. "Confcon" add-on on AKS enables the features below.
+The add-on feature enables extra capability on AKS when running confidential computing Intel SGX-capable node pools on the cluster. The "confcom" add-on on AKS enables the following features.
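+
+For example, the add-on can be enabled on an existing cluster with the Azure CLI; a minimal sketch with illustrative cluster and resource group names:
+
+```azurecli-interactive
+# Enable the confidential computing (confcom) add-on on an existing AKS cluster.
+az aks enable-addons \
+  --addons confcom \
+  --name myAKSCluster \
+  --resource-group myResourceGroup
+```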
#### Azure Device Plugin for Intel SGX <a id="sgx-plugin"></a>
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
This tutorial will show you how to enable Azure Container Apps on your Arc-enabl
Install the following Azure CLI extensions.
-# [bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az extension add --name connectedk8s --upgrade --yes az extension add --name k8s-extension --upgrade --yes az extension add --name customlocation --upgrade --yes
az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-n
# [PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
az extension add --name connectedk8s --upgrade --yes az extension add --name k8s-extension --upgrade --yes az extension add --name customlocation --upgrade --yes
az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-n
Register the required namespaces.
-# [bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az provider register --namespace Microsoft.ExtendedLocation --wait az provider register --namespace Microsoft.KubernetesConfiguration --wait az provider register --namespace Microsoft.App --wait
az provider register --namespace Microsoft.OperationalInsights --wait
# [PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
az provider register --namespace Microsoft.ExtendedLocation --wait az provider register --namespace Microsoft.KubernetesConfiguration --wait az provider register --namespace Microsoft.App --wait
az provider register --namespace Microsoft.OperationalInsights --wait
Set environment variables based on your Kubernetes cluster deployment.
-# [bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
```bash GROUP_NAME="my-arc-cluster-group"
LOCATION="eastus"
# [PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$GROUP_NAME="my-arc-cluster-group" $AKS_CLUSTER_GROUP_NAME="my-aks-cluster-group" $AKS_NAME="my-aks-cluster"
The following steps help you get started understanding the service, but for prod
1. Create a cluster in Azure Kubernetes Service.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az group create --name $AKS_CLUSTER_GROUP_NAME --location $LOCATION az aks create \ --resource-group $AKS_CLUSTER_GROUP_NAME \
The following steps help you get started understanding the service, but for prod
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
az group create --name $AKS_CLUSTER_GROUP_NAME --location $LOCATION az aks create ` --resource-group $AKS_CLUSTER_GROUP_NAME `
The following steps help you get started understanding the service, but for prod
1. Get the [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file and test your connection to the cluster. By default, the kubeconfig file is saved to `~/.kube/config`.
- ```azurecli
+ ```azurecli-interactive
az aks get-credentials --resource-group $AKS_CLUSTER_GROUP_NAME --name $AKS_NAME --admin kubectl get ns
The following steps help you get started understanding the service, but for prod
1. Create a resource group to contain your Azure Arc resources.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az group create --name $GROUP_NAME --location $LOCATION ``` # [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
az group create --name $GROUP_NAME --location $LOCATION ```
The following steps help you get started understanding the service, but for prod
1. Connect the cluster you created to Azure Arc.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME
The following steps help you get started understanding the service, but for prod
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME
The following steps help you get started understanding the service, but for prod
1. Validate the connection with the following command. It should show the `provisioningState` property as `Succeeded`. If not, run the command again after a minute.
- ```azurecli
+ ```azurecli-interactive
az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME ```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Create a Log Analytics workspace.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
WORKSPACE_NAME="$GROUP_NAME-workspace" # Name of the Log Analytics workspace az monitor log-analytics workspace create \
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$WORKSPACE_NAME="$GROUP_NAME-workspace" az monitor log-analytics workspace create `
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Run the following commands to get the encoded workspace ID and shared key for an existing Log Analytics workspace. You need them in the next step.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
LOG_ANALYTICS_WORKSPACE_ID=$(az monitor log-analytics workspace show \ --resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME \
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$LOG_ANALYTICS_WORKSPACE_ID=$(az monitor log-analytics workspace show ` --resource-group $GROUP_NAME ` --workspace-name $WORKSPACE_NAME `
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Set the following environment variables to the desired name of the [Container Apps extension](azure-arc-create-container-app.md), the cluster namespace in which resources should be provisioned, and the name for the Azure Container Apps connected environment. Choose a unique name for `<connected-environment-name>`. The connected environment name will be part of the domain name for the app you'll create in the Azure Container Apps connected environment.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
```bash EXTENSION_NAME="appenv-ext"
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$EXTENSION_NAME="appenv-ext" $NAMESPACE="appplat-ns" $CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>"
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Install the Container Apps extension to your Azure Arc-connected cluster with Log Analytics enabled. Log Analytics can't be added to the extension later.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az k8s-extension create \ --resource-group $GROUP_NAME \ --name $EXTENSION_NAME \
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
az k8s-extension create ` --resource-group $GROUP_NAME ` --name $EXTENSION_NAME `
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Save the `id` property of the Container Apps extension for later.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
EXTENSION_ID=$(az k8s-extension show \ --cluster-type connectedClusters \ --cluster-name $CLUSTER_NAME \
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$EXTENSION_ID=$(az k8s-extension show ` --cluster-type connectedClusters ` --cluster-name $CLUSTER_NAME `
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
1. Wait for the extension to fully install before proceeding. You can have your terminal session wait until it completes by running the following command:
- ```azurecli
+ ```azurecli-interactive
az resource wait --ids $EXTENSION_ID --custom "properties.provisioningState!='Pending'" --api-version "2020-07-01-preview" ```
The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l
1. Set the following environment variables to the desired name of the custom location and for the ID of the Azure Arc-connected cluster.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```bash
+ ```azurecli-interactive
CUSTOM_LOCATION_NAME="my-custom-location" # Name of the custom location CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME --query id --output tsv) ``` # [PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
$CUSTOM_LOCATION_NAME="my-custom-location" # Name of the custom location $CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME --query id --output tsv) ```
The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l
1. Create the custom location:
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az customlocation create \ --resource-group $GROUP_NAME \ --name $CUSTOM_LOCATION_NAME \
The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l
1. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, rerun the command after a minute.
- ```azurecli
+ ```azurecli-interactive
az customlocation show --resource-group $GROUP_NAME --name $CUSTOM_LOCATION_NAME ``` 1. Save the custom location ID for the next step.
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
CUSTOM_LOCATION_ID=$(az customlocation show \ --resource-group $GROUP_NAME \ --name $CUSTOM_LOCATION_NAME \
The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l
# [PowerShell](#tab/azure-powershell)
- ```azurecli
+ ```azurecli-interactive
$CUSTOM_LOCATION_ID=$(az customlocation show ` --resource-group $GROUP_NAME ` --name $CUSTOM_LOCATION_NAME `
Before you can start creating apps in the custom location, you need an [Azure Co
1. Create the Container Apps connected environment:
- # [bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az containerapp connected-env create \ --resource-group $GROUP_NAME \ --name $CONNECTED_ENVIRONMENT_NAME \
Before you can start creating apps in the custom location, you need an [Azure Co
# [PowerShell](#tab/azure-powershell)
- ```azurecli
+ ```azurecli-interactive
az containerapp connected-env create ` --resource-group $GROUP_NAME ` --name $CONNECTED_ENVIRONMENT_NAME `
Before you can start creating apps in the custom location, you need an [Azure Co
1. Validate that the Container Apps connected environment is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, run it again after a minute.
- ```azurecli
+ ```azurecli-interactive
az containerapp connected-env show --resource-group $GROUP_NAME --name $CONNECTED_ENVIRONMENT_NAME ```
container-apps Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md
Last updated 11/09/2022
-# Deploy to Azure Container Apps from Azure Pipelines (preview)
+# Deploy to Azure Container Apps from Azure Pipelines
Azure Container Apps allows you to use Azure Pipelines to publish [revisions](revisions.md) to your container app. As commits are pushed to your [Azure DevOps repository](/azure/devops/repos/), a pipeline is triggered which updates the container image in the container registry. Azure Container Apps creates a new revision based on the updated container image.
The pipeline is triggered by commits to a specific branch in your repository. Wh
## Container Apps Azure Pipelines task
-To build and deploy your container app, add the [`AzureContainerAppsRC`](https://marketplace.visualstudio.com/items?itemName=microsoft-oryx.AzureContainerAppsRC) (preview) Azure Pipelines task to your pipeline.
- The task supports the following scenarios: * Build from a Dockerfile and deploy to Container Apps * Build from source code without a Dockerfile and deploy to Container Apps. Supported languages include .NET, Node.js, PHP, Python, and Ruby * Deploy an existing container image to Container Apps
+With the production release, this task is included with Azure DevOps and no longer requires explicit installation. For the complete documentation, see [AzureContainerApps@1 - Azure Container Apps Deploy v1 task](https://learn.microsoft.com/azure/devops/pipelines/tasks/reference/azure-container-apps-v1).
+ ### Usage examples Here are some common scenarios for using the task. For more information, see the [task's documentation](https://github.com/Azure/container-apps-deploy-pipelines-task/blob/main/README.md).
The following snippet shows how to build a container image from source code and
```yaml steps:-- task: AzureContainerAppsRC@0
+- task: AzureContainerApps@1
inputs: appSourcePath: '$(Build.SourcesDirectory)/src' azureSubscription: 'my-subscription-service-connection'
The task uses the Dockerfile in `appSourcePath` to build the container image. If
#### Deploy an existing container image to Container Apps
-The following snippet shows how to deploy an existing container image to Container Apps.
+The following snippet shows how to deploy an existing container image to Container Apps. Note that this example deploys a publicly available image, so no registry authentication is needed.
```yaml steps:
- - task: AzureContainerAppsRC@0
+ - task: AzureContainerApps@1
inputs: azureSubscription: 'my-subscription-service-connection'
- acrName: 'myregistry'
+ imageToDeploy: 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
containerAppName: 'my-container-app' resourceGroup: 'my-container-app-rg'
After your app is created, you can add a managed identity to your app and assign
[!INCLUDE [container-apps-github-devops-setup.md](../../includes/container-apps-github-devops-setup.md)]
-### Install the Azure Container Apps task
-
-The Azure Container Apps Azure Pipelines task is currently in preview. Before you use the task, you must install it from the Azure DevOps Marketplace.
-
-1. Open the [Azure Container Apps task](https://marketplace.visualstudio.com/items?itemName=microsoft-oryx.AzureContainerAppsRC) in the Azure DevOps Marketplace.
-
-1. Select **Get it free**.
-
-1. Select your Azure DevOps organization and select **Install**.
### Create an Azure DevOps service connection
To learn more about service connections, see [Connect to Microsoft Azure](/azure
vmImage: ubuntu-latest steps:
- - task: AzureContainerAppsRC@0
+ - task: AzureContainerApps@1
inputs: appSourcePath: '$(Build.SourcesDirectory)/src' azureSubscription: '<AZURE_SUBSCRIPTION_SERVICE_CONNECTION>'
container-apps Dapr Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md
The [sample solution](https://github.com/Azure-Samples/container-apps-store-api-
In the console, set the following environment variables:
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
Replace \<PLACEHOLDERS\> with your values.
SUBSCRIPTION_ID="<YOUR_SUBSCRIPTION_ID>"
Replace \<Placeholders\> with your values.
-```powershell
+```azurepowershell-interactive
$ResourceGroup="my-containerapp-store" $Location="canadacentral" $GitHubUsername="<GitHubUsername>"
$SubscriptionId="<SubscriptionId>"
Sign in to Azure from the CLI using the following command, and follow the prompts in your browser to complete the authentication process.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az login ``` # [PowerShell](#tab/powershell)
-```azurepowershell
+```azurepowershell-interactive
Connect-AzAccount ```
Connect-AzAccount
Ensure you're running the latest version of the CLI via the upgrade command.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az upgrade ``` # [PowerShell](#tab/powershell)
-```azurepowershell
+```azurepowershell-interactive
Install-Module -Name Az.App ```
Now that you've validated your Azure CLI setup, bring the application code to yo
1. Use the following [git](https://git-scm.com/downloads) command with your GitHub username to clone **your fork** of the repo to your development environment:
- # [Bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
```git git clone https://github.com/$GITHUB_USERNAME/container-apps-store-api-microservice.git
Now that you've validated your Azure CLI setup, bring the application code to yo
1. Navigate into the cloned directory.
- ```console
+ ```bash
cd container-apps-store-api-microservice ```
The following resources are deployed via the bicep template in the `/deploy` pat
The workflow requires a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) to authenticate to Azure. In the console, run the following command and replace `<SERVICE_PRINCIPAL_NAME>` with your own unique value.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az ad sp create-for-rbac \ --name <SERVICE_PRINCIPAL_NAME> \ --role "contributor" \
az ad sp create-for-rbac \
# [PowerShell](#tab/powershell)
-```azurepowershell
+```azurepowershell-interactive
$CmdArgs = @{ DisplayName = '<SERVICE_PRINCIPAL_NAME>' Role = 'contributor'
To demonstrate the inner-loop experience for creating revisions via GitHub actio
1. Return to the console, and navigate into the *node-service/views* directory in the forked repository.
- # [Bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```bash
cd node-service/views ```
To demonstrate the inner-loop experience for creating revisions via GitHub actio
1. Open the *index.jade* file in your editor of choice.
- # [Bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```bash
code index.jade . ```
To demonstrate the inner-loop experience for creating revisions via GitHub actio
1. Stage the changes and push to the `main` branch of your fork using git.
- # [Bash](#tab/bash)
+ # [Azure CLI](#tab/azure-cli)
```git git add .
To demonstrate the inner-loop experience for creating revisions via GitHub actio
Once you've finished the tutorial, run the following command to delete your resource group, along with all the resources you created in this tutorial.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az group delete \ --resource-group $RESOURCE_GROUP ``` # [PowerShell](#tab/powershell)
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps
-Learn more about how [Dapr integrates with Azure Container Apps](./dapr-overview.md).
+Learn more about how [Dapr integrates with Azure Container Apps](./dapr-overview.md).
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
In this tutorial, you'll deploy a containerized application to Azure Container A
1. Begin by cloning the [sample repository](https://github.com/azure-samples/containerapps-albumapi-javascript) to your machine using the following command.
- ```bash
+ ```git
git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git ```
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
When using these commands, replace the `<PLACEHOLDERS>` with your values.
>[!NOTE] > The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az network vnet create \ --resource-group <RESOURCE_GROUP_NAME> \ --name <VNET_NAME> \
az network vnet create \
--address-prefix 10.0.0.0/16 ```
-```azurecli
+```azurecli-interactive
az network vnet subnet create \ --resource-group <RESOURCE_GROUP_NAME> \ --vnet-name <VNET_NAME> \
az network vnet subnet create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$SubnetArgs = @{ Name = 'infrastructure-subnet' AddressPrefix = '10.0.0.0/21'
$SubnetArgs = @{
$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs ```
-```azurepowershell
+```azurepowershell-interactive
$VnetArgs = @{ Name = <VNetName> Location = <Location>
$vnet = New-AzVirtualNetwork @VnetArgs
Next, query for the infrastructure subnet ID.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure --query "id" -o tsv | tr -d '[:space:]'` ``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$InfrastructureSubnet=(Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id ```
$InfrastructureSubnet=(Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -
Finally, create the environment with the `--zone-redundant` parameter. The location must be the same location used when creating the VNET.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp env create \ --name <CONTAINER_APP_ENV_NAME> \ --resource-group <RESOURCE_GROUP_NAME> \
az containerapp env create \
A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-```azurepowershell
+```azurepowershell-interactive
$WorkspaceArgs = @{ Name = 'myworkspace' ResourceGroupName = <ResourceGroupName>
$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGrou
To create the environment, run the following command:
-```azurepowershell
+```azurepowershell-interactive
$EnvArgs = @{ EnvName = <EnvironmentName> ResourceGroupName = <ResourceGroupName>
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
This article demonstrates how to deploy an existing container to Azure Container
To create the environment, run the following command:
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
az containerapp env create \
A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-```azurepowershell
+```azurepowershell-interactive
$WorkspaceArgs = @{ Name = 'myworkspace' ResourceGroupName = $ResourceGroupName
$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGrou
To create the environment, run the following command:
-```azurepowershell
+```azurepowershell-interactive
$EnvArgs = @{ EnvName = $ContainerAppsEnvironment ResourceGroupName = $ResourceGroupName
The example shown in this article demonstrates how to use a custom container ima
::: zone pivot="container-apps-private-registry"
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help` or [visit the online reference](/cli/azure/containerapp#az-containerapp-create). To generate credentials for an Azure Container Registry, use [az acr credential show](/cli/azure/acr/credential#az-acr-credential-show).
REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
(Replace the \<placeholders\> with your values.)
-```azurecli
+```azurecli-interactive
az containerapp create \ --name my-container-app \ --resource-group $RESOURCE_GROUP \
az containerapp create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$ContainerImageName = "<CONTAINER_IMAGE_NAME>" $RegistryServer = "<REGISTRY_SERVER>" $RegistryUsername = "<REGISTRY_USERNAME>"
$RegistryPassword = "<REGISTRY_PASSWORD>"
(Replace the \<placeholders\> with your values.)
-```azurepowershell
+```azurepowershell-interactive
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id $TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image $ContainerImageName
New-AzContainerApp @ContainerAppArgs
::: zone pivot="container-apps-public-registry"
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp create \ --image <REGISTRY_CONTAINER_NAME> \ --name my-container-app \
If you have enabled ingress on your container app, you can add `--query properti
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>" ``` (Replace the \<REGISTRY_CONTAINER_NAME\> with your value.)
-```azurepowershell
+```azurepowershell-interactive
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id $ContainerAppArgs = @{
To verify a successful deployment, you can query the Log Analytics workspace. Yo
Use the following commands to view console log messages.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv` az monitor log-analytics query \
az monitor log-analytics query \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated" $queryResults.Results ```
If you're not going to continue to use this application, run the following comma
>[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az group delete --name $RESOURCE_GROUP ``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
container-apps Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md
Last updated 11/09/2022
-# Deploy to Azure Container Apps with GitHub Actions (preview)
+# Deploy to Azure Container Apps with GitHub Actions
Azure Container Apps allows you to use GitHub Actions to publish [revisions](revisions.md) to your container app. As commits are pushed to your GitHub repository, a workflow is triggered which updates the container image in the container registry. Azure Container Apps creates a new revision based on the updated container image.
steps:
      creds: ${{ secrets.AZURE_CREDENTIALS }}

  - name: Build and deploy Container App
- uses: azure/container-apps-deploy-action@v0
+ uses: azure/container-apps-deploy-action@v1
    with:
      appSourcePath: ${{ github.workspace }}/src
      acrName: myregistry
steps:
      creds: ${{ secrets.AZURE_CREDENTIALS }}

  - name: Build and deploy Container App
- uses: azure/container-apps-deploy-action@v0
+ uses: azure/container-apps-deploy-action@v1
    with:
      acrName: myregistry
      containerAppName: my-container-app
Before creating a workflow, the source code for your app must be in a GitHub rep
1. Log in to Azure with the Azure CLI.
- ```azurecli
+ ```azurecli-interactive
   az login
   ```

1. Next, install the latest Azure Container Apps extension for the CLI.
- ```azurecli
+ ```azurecli-interactive
az extension add --name containerapp --upgrade ```
Before creating a workflow, the source code for your app must be in a GitHub rep
1. Clone the repository to your local machine.
- ```bash
+ ```git
git clone https://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/my-container-app.git ```
The GitHub workflow requires a secret named `AZURE_CREDENTIALS` to authenticate
1. Create a service principal with the *Contributor* role on the resource group that contains the container app and container registry.
- ```azurecli
+ ```azurecli-interactive
az ad sp create-for-rbac \ --name my-app-credentials \ --role contributor \
The GitHub workflow requires a secret named `AZURE_CREDENTIALS` to authenticate
      creds: ${{ secrets.AZURE_CREDENTIALS }}

  - name: Build and deploy Container App
- uses: azure/container-apps-deploy-action@v0
+ uses: azure/container-apps-deploy-action@v1
    with:
      appSourcePath: ${{ github.workspace }}/src
      acrName: <ACR_NAME>
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
When you create a container app, secrets are defined using the `--secrets` param
- The parameter accepts a space-delimited set of name/value pairs.
- Each pair is delimited by an equals sign (`=`).
-```bash
+```azurecli-interactive
az containerapp create \ --resource-group "my-resource-group" \ --name queuereader \
Here, a connection string to a queue storage account is declared in the `--secre
When you create a container app, secrets are defined as one or more Secret objects that are passed through the `ConfigurationSecrets` parameter.
-```azurepowershell
+```azurepowershell-interactive
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName my-resource-group -EnvName my-environment-name).Id
$TemplateObj = New-AzContainerAppTemplateObject -Name queuereader -Image demos/queuereader:v1
$SecretObj = New-AzContainerAppSecretObject -Name queue-connection-string -Value $QueueConnectionString
When you create a container app, secrets are defined using the `--secrets` param
- Each pair is delimited by an equals sign (`=`).
- To specify a Key Vault reference, use the format `<SECRET_NAME>=keyvaultref:<KEY_VAULT_SECRET_URI>,identityref:<MANAGED_IDENTITY_ID>`. For example, `queue-connection-string=keyvaultref:https://mykeyvault.vault.azure.net/secrets/queuereader,identityref:/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity`.
-```bash
+```azurecli-interactive
az containerapp create \ --resource-group "my-resource-group" \ --name queuereader \
To avoid committing secret values to source control with your ARM template, pass
In this example, you create a container app using the Azure CLI with a secret that's referenced in an environment variable. To reference a secret in an environment variable in the Azure CLI, set its value to `secretref:`, followed by the name of the secret.
-```bash
+```azurecli-interactive
az containerapp create \ --resource-group "my-resource-group" \ --name myQueueApp \
Here, the environment variable named `connection-string` gets its value from the
In this example, you create a container using Azure PowerShell with a secret that's referenced in an environment variable. To reference the secret in an environment variable in PowerShell, set its value to `secretref:`, followed by the name of the secret.
-```azurecli
+```azurepowershell-interactive
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName my-resource-group -EnvName my-environment-name).Id
$SecretObj = New-AzContainerAppSecretObject -Name queue-connection-string -Value $QueueConnectionString
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
The following architecture diagram illustrates the components that make up this
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
-```azurecli
+```azurecli-interactive
az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
az containerapp env create \
Individual container apps are deployed to an Azure Container Apps environment. A Log Analytics workspace is deployed as the logging backend before the environment is deployed. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-```azurepowershell
+```azurepowershell-interactive
$WorkspaceArgs = @{
  Name = 'myworkspace'
  ResourceGroupName = $ResourceGroupName
$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGrou
To create the environment, run the following command:
-```azurepowershell
+```azurepowershell-interactive
$EnvArgs = @{
  EnvName = $ContainerAppsEnvironment
  ResourceGroupName = $ResourceGroupName
New-AzContainerAppManagedEnv @EnvArgs
With the environment deployed, the next step is to deploy an Azure Blob Storage account that is used by one of the microservices to store data. Before deploying the service, you need to choose a name for the storage account. Storage account names must be _unique within Azure_, from 3 to 24 characters in length and must contain numbers and lowercase letters only.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
STORAGE_ACCOUNT_NAME="<storage account name>"
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$StorageAcctName = '<storage account name>' ```
$StorageAcctName = '<storage account name>'
Use the following command to create the Azure Storage account.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az storage account create \ --name $STORAGE_ACCOUNT_NAME \ --resource-group $RESOURCE_GROUP \
az storage account create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Install-Module Az.Storage

$StorageAcctArgs = @{
While Container Apps supports both user-assigned and system-assigned managed ide
1. Create a user-assigned identity.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az identity create --resource-group $RESOURCE_GROUP --name "nodeAppIdentity" --output json
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Install-Module -Name AZ.ManagedServiceIdentity

New-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity' -Location $Location
Retrieve the `principalId` and `id` properties and store in variables.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
PRINCIPAL_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query principalId | tr -d \")
IDENTITY_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query id | tr -d \")
CLIENT_ID=$(az identity show -n "nodeAppIdentity" --resource-group $RESOURCE_GROUP --query clientId | tr -d \")
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$PrincipalId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').PrincipalId
$IdentityId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').Id
$ClientId = (Get-AzUserAssignedIdentity -ResourceGroupName $ResourceGroupName -Name 'nodeAppIdentity').ClientId
Retrieve the subscription ID for your current subscription.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
SUBSCRIPTION_ID=$(az account show --query id --output tsv)
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$SubscriptionId=$(Get-AzContext).Subscription.id ```
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az role assignment create --assignee $PRINCIPAL_ID \ --role "Storage Blob Data Contributor" \ --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT_NAME"
az role assignment create --assignee $PRINCIPAL_ID \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Install-Module Az.Resources

New-AzRoleAssignment -ObjectId $PrincipalId -RoleDefinitionName 'Storage Blob Data Contributor' -Scope "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage/storageAccounts/$StorageAcctName"
There are multiple ways to authenticate to external resources via Dapr. This example doesn't use the Dapr Secrets API at runtime, but uses an Azure-based state store. Therefore, you can forgo creating a secret store component and instead provide direct access from the node app to the blob store using Managed Identity. If you want to use a non-Azure state store or the Dapr Secrets API at runtime, you could create a secret store component. This component would load runtime secrets so you can reference them at runtime.
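A hedged sketch of that alternative path; the `secretstore` component name and `secretstore.yaml` file are illustrative (not part of this tutorial), while the command itself is the same one used to register the state store below:

```azurecli-interactive
# Register a hypothetical Dapr secret store component in the environment
az containerapp env dapr-component set \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --dapr-component-name secretstore \
  --yaml secretstore.yaml
```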
-# [Bash](#tab/bash)
- Open a text editor and create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:

```yaml
To use this file, update the placeholders:
- Replace `<STORAGE_ACCOUNT_NAME>` with the value of the `STORAGE_ACCOUNT_NAME` variable you defined. To obtain its value, run the following command:
-```azurecli
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
echo $STORAGE_ACCOUNT_NAME
```

- Replace `<MANAGED_IDENTITY_CLIENT_ID>` with the value of the `CLIENT_ID` variable you defined. To obtain its value, run the following command:
-```azurecli
+```azurecli-interactive
echo $CLIENT_ID
```

Navigate to the directory in which you stored the component yaml file and run the following command to configure the Dapr component in the Container Apps environment. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md).
-```azurecli
+```azurecli-interactive
az containerapp env dapr-component set \ --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP \ --dapr-component-name statestore \
az containerapp env dapr-component set \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$AcctName = New-AzContainerAppDaprMetadataObject -Name "accountName" -Value $StorageAcctName
New-AzContainerAppManagedEnvDapr @DaprArgs
## Deploy the service application (HTTP web server)
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp create \ --name nodeapp \ --resource-group $RESOURCE_GROUP \
az containerapp create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id
$EnvVars = New-AzContainerAppEnvironmentVarObject -Name APP_PORT -Value 3000
By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapri
Run the following command to deploy the client container app.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp create \ --name pythonapp \ --resource-group $RESOURCE_GROUP \
az containerapp create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$TemplateArgs = @{ Name = 'pythonapp'
New-AzContainerApp @ClientArgs
-## Verify the result
+## Verify the results
### Confirm successful state persistence
Logs from container apps are stored in the `ContainerAppConsoleLogs_CL` custom t
Use the following CLI command to view logs using the command line.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`

az monitor log-analytics query \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5 "
$queryResults.Results
Congratulations! You've completed this tutorial. If you'd like to delete the res
> [!CAUTION]
> This command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az group delete --resource-group $RESOURCE_GROUP
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Define an HTTP scale rule using the `--scale-rule-http-concurrency` parameter in
| Parameter | Description | Default value | Min value | Max value |
|---|---|---|---|---|
| `--scale-rule-http-concurrency` | When the number of concurrent HTTP requests exceeds this value, another replica is added. Replicas continue to be added, up to the `max-replicas` amount. | 10 | 1 | n/a |
-```bash
+```azurecli-interactive
az containerapp create \ --name <CONTAINER_APP_NAME> \ --resource-group <RESOURCE_GROUP> \
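A minimal hedged sketch of how these scale flags compose on a full `az containerapp create` call (the environment, image, and replica bounds are illustrative):

```azurecli-interactive
az containerapp create \
  --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --environment <ENVIRONMENT_NAME> \
  --image <CONTAINER_IMAGE_LOCATION> \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name my-http-rule \
  --scale-rule-http-concurrency 10
```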
Define a TCP scale rule using the `--scale-rule-tcp-concurrency` parameter in th
| Parameter | Description | Default value | Min value | Max value |
|---|---|---|---|---|
| `--scale-rule-tcp-concurrency` | When the number of concurrent TCP connections exceeds this value, another replica is added. Replicas continue to be added, up to the `max-replicas` amount, as the number of concurrent connections increases. | 10 | 1 | n/a |
-```bash
+```azurecli-interactive
az containerapp create \ --name <CONTAINER_APP_NAME> \ --resource-group <RESOURCE_GROUP> \
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Create a resource group with the [az group create](/cli/azure/group#az-group-cre
The following example creates a resource group named `myResourceGroup` in the East US Azure region.
-```azurecli
+```azurecli-interactive
az group create --name myResourceGroup --location eastus
```

Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command. The registry name must be unique within Azure and contain 5-50 lowercase alphanumeric characters. In the following example, `mycontainerregistry007` is used. Update this to a unique value.
-```azurecli
+```azurecli-interactive
az acr create \ --resource-group myResourceGroup \ --name mycontainerregistry007 \
This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus R
Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
-```bash
+```git
git clone https://github.com/quarkusio/quarkus-quickstarts
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
```
Before pushing container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. Specify only the registry resource name when signing in with the Azure CLI. Don't use the fully qualified login server name.
- ```azurecli
+ ```azurecli-interactive
az acr login --name <registry-name> ```
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
1. Create a Container Apps instance by running the following command. Make sure you replace the value of the environment variables with the actual name and location you want to use.
- ```azurecli
+ ```azurecli-interactive
RESOURCE_GROUP="myResourceGroup"
LOCATION="eastus"
CONTAINERAPPS_ENVIRONMENT="my-environment"
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
1. Create a container app with your app image by running the following command. Replace the placeholders with your values. To find the container registry admin account details, see [Authenticate with an Azure container registry](../container-registry/container-registry-authentication.md)
- ```azurecli
+ ```azurecli-interactive
CONTAINER_IMAGE_NAME=quarkus-postgres-passwordless-app:v1
REGISTRY_SERVER=mycontainerregistry007
REGISTRY_USERNAME=<REGISTRY_USERNAME>
Next, create a PostgreSQL Database and configure your container app to connect t
### [Flexible Server](#tab/flexible)
- ```azurecli
+ ```azurecli-interactive
DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'
ADMIN_USERNAME='demoadmin'
ADMIN_PASSWORD='<admin-password>'
Next, create a PostgreSQL Database and configure your container app to connect t
### [Single Server](#tab/single)
- ```azurecli
+ ```azurecli-interactive
DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'
ADMIN_USERNAME='demoadmin'
ADMIN_PASSWORD='<admin-password>'
Next, create a PostgreSQL Database and configure your container app to connect t
### [Flexible Server](#tab/flexible)
- ```azurecli
+ ```azurecli-interactive
az postgres flexible-server db create \ --resource-group $RESOURCE_GROUP \ --server-name $DB_SERVER_NAME \
Next, create a PostgreSQL Database and configure your container app to connect t
### [Single Server](#tab/single)
- ```azurecli
+ ```azurecli-interactive
az postgres db create \ --resource-group $RESOURCE_GROUP \ --server-name $DB_SERVER_NAME \
Next, create a PostgreSQL Database and configure your container app to connect t
1. Install the [Service Connector](../service-connector/overview.md) passwordless extension for the Azure CLI:
- ```azurecli
+ ```azurecli-interactive
az extension add --name serviceconnector-passwordless --upgrade ```
Next, create a PostgreSQL Database and configure your container app to connect t
### [Flexible Server](#tab/flexible)
- ```azurecli
+ ```azurecli-interactive
az containerapp connection create postgres-flexible \ --resource-group $RESOURCE_GROUP \ --name my-container-app \
Next, create a PostgreSQL Database and configure your container app to connect t
### [Single Server](#tab/single)
- ```azurecli
+ ```azurecli-interactive
az containerapp connection create postgres \ --resource-group $RESOURCE_GROUP \ --name my-container-app \
Next, create a PostgreSQL Database and configure your container app to connect t
You can find the application URL (FQDN) by using the following command:
-```azurecli
+```azurecli-interactive
az containerapp list --resource-group $RESOURCE_GROUP
```
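To pull out just the FQDN for a single app, a hedged one-liner (assuming ingress is enabled on the app; the JMESPath path is the one Container Apps exposes for ingress):

```azurecli-interactive
az containerapp show \
  --name my-container-app \
  --resource-group $RESOURCE_GROUP \
  --query properties.configuration.ingress.fqdn \
  --output tsv
```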
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
The following example shows you how to create a Container Apps environment in an
Next, declare a variable to hold the VNET name.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
VNET_NAME="my-custom-vnet"
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$VnetName = 'my-custom-vnet' ```
Now create an instance of the virtual network to associate with the Container Ap
> [!NOTE]
> The network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only architecture. When using the Workload Profiles architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az network vnet create \ --resource-group $RESOURCE_GROUP \ --name $VNET_NAME \
az network vnet create \
--address-prefix 10.0.0.0/16 ```
-```azurecli
+```azurecli-interactive
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
az network vnet subnet create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$SubnetArgs = @{ Name = 'infrastructure-subnet' AddressPrefix = '10.0.0.0/23'
$SubnetArgs = @{
$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
```
-```azurepowershell
+```azurepowershell-interactive
$VnetArgs = @{ Name = $VnetName Location = $Location
$vnet = New-AzVirtualNetwork @VnetArgs
With the VNET established, you can now query for the infrastructure subnet ID.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$InfrastructureSubnet = (Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
```
Finally, create the Container Apps environment with the VNET and subnet.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
With your environment created using your custom virtual network, you can deploy
A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-```azurepowershell
+```azurepowershell-interactive
$WorkspaceArgs = @{ Name = 'myworkspace' ResourceGroupName = $ResourceGroupName
$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGrou
To create the environment, run the following command:
-```azurepowershell
+```azurepowershell-interactive
$EnvArgs = @{ EnvName = $ContainerAppsEnvironment ResourceGroupName = $ResourceGroupName
If you want to deploy your container app with a private DNS, run the following c
First, extract identifiable information from the environment.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.defaultDomain --out json | tr -d '"'` ```
-```bash
+```azurecli-interactive
ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.staticIp --out json | tr -d '"'` ```
-```bash
+```azurecli-interactive
VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$EnvironmentDefaultDomain = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).DefaultDomain ```
-```azurepowershell
+```azurepowershell-interactive
$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).StaticIp ```
$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvi
Next, set up the private DNS.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az network private-dns zone create \ --resource-group $RESOURCE_GROUP \ --name $ENVIRONMENT_DEFAULT_DOMAIN ```
-```azurecli
+```azurecli-interactive
az network private-dns link vnet create \ --resource-group $RESOURCE_GROUP \ --name $VNET_NAME \
az network private-dns link vnet create \
--zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true ```
-```azurecli
+```azurecli-interactive
az network private-dns record-set a add-record \ --resource-group $RESOURCE_GROUP \ --record-set-name "*" \
There are three optional networking parameters you can choose to define when cal
You must either provide values for all three of these properties, or none of them. If they aren't provided, the values are generated for you.
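A sketch of supplying all three explicitly (the flag names follow the Container Apps CLI's platform networking parameters; the CIDR values are illustrative and must not overlap the infrastructure subnet):

```azurecli-interactive
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
  --platform-reserved-cidr 10.1.0.0/16 \
  --platform-reserved-dns-ip 10.1.0.2 \
  --docker-bridge-cidr 10.2.0.1/16
```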
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
| Parameter | Description | |||
If you're not going to continue to use this application, you can delete the Azur
>[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az group delete --name $RESOURCE_GROUP ``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
Register the `Microsoft.ContainerService` provider.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
az provider register --namespace Microsoft.ContainerService
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService ```
Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
Declare a variable to hold the VNET name.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
```bash
VNET_NAME="my-custom-vnet"
```
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$VnetName = 'my-custom-vnet' ```
Now create an Azure virtual network to associate with the Container Apps environ
> [!NOTE]
> The network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only architecture. When using the Workload Profiles architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az network vnet create \ --resource-group $RESOURCE_GROUP \ --name $VNET_NAME \
az network vnet create \
--address-prefix 10.0.0.0/16 ```
-```azurecli
+```azurecli-interactive
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
az network vnet subnet create \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$SubnetArgs = @{ Name = 'infrastructure-subnet' AddressPrefix = '10.0.0.0/21'
$SubnetArgs = @{
$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs ```
-```azurepowershell
+```azurepowershell-interactive
$VnetArgs = @{ Name = $VnetName Location = $Location
$vnet = New-AzVirtualNetwork @VnetArgs
With the virtual network created, you can retrieve the ID for the infrastructure subnet.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$InfrastructureSubnet=(Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
```
Finally, create the Container Apps environment using the custom VNET deployed in the preceding steps.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
The following table describes the parameters used in `containerapp env create`.
A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-```azurepowershell
+```azurepowershell-interactive
$WorkspaceArgs = @{ Name = 'myworkspace' ResourceGroupName = $ResourceGroupName
$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGrou
To create the environment, run the following command:
-```azurepowershell
+```azurepowershell-interactive
$EnvArgs = @{ EnvName = $ContainerAppsEnvironment ResourceGroupName = $ResourceGroupName
If you want to deploy your container app with a private DNS, run the following c
First, extract identifiable information from the environment.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```bash
+```azurecli-interactive
ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.defaultDomain --out json | tr -d '"'` ```
-```bash
+```azurecli-interactive
ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.staticIp --out json | tr -d '"'` ```
-```bash
+```azurecli-interactive
VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
$EnvironmentDefaultDomain = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).DefaultDomain ```
-```azurepowershell
+```azurepowershell-interactive
$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).StaticIp ```
$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvi
Next, set up the private DNS.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az network private-dns zone create \ --resource-group $RESOURCE_GROUP \ --name $ENVIRONMENT_DEFAULT_DOMAIN ```
-```azurecli
+```azurecli-interactive
az network private-dns link vnet create \ --resource-group $RESOURCE_GROUP \ --name $VNET_NAME \
az network private-dns link vnet create \
--zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true ```
-```azurecli
+```azurecli-interactive
az network private-dns record-set a add-record \ --resource-group $RESOURCE_GROUP \ --record-set-name "*" \
az network private-dns record-set a add-record \
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
New-AzPrivateDnsZone -ResourceGroupName $ResourceGroupName -Name $EnvironmentDefaultDomain ```
-```azurepowershell
+```azurepowershell-interactive
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $ResourceGroupName -Name $VnetName -VirtualNetwork $Vnet -ZoneName $EnvironmentDefaultDomain -EnableRegistration ```
-```azurepowershell
+```azurepowershell-interactive
$DnsRecords = @()
$DnsRecords += New-AzPrivateDnsRecordConfig -Ipv4Address $EnvironmentStaticIp
There are three optional networking parameters you can choose to define when cal
You must either provide values for all three of these properties, or none of them. If they aren't provided, the values are generated for you.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
| Parameter | Description | |||
If you're not going to continue to use this application, you can delete the Azur
>[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted.
-# [Bash](#tab/bash)
+# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az group delete --name $RESOURCE_GROUP
```

# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
+```azurepowershell-interactive
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
If you have an existing virtual network that meets these criteria, you can skip
1. Create the virtual network using the [az network vnet create][az-network-vnet-create] command. Enter address prefixes in Classless Inter-Domain Routing (CIDR) format (for example: `10.0.0.0/16`).
- ```azurecli
+ ```azurecli-interactive
az network vnet create \ --name aci-vnet \ --resource-group ACIResourceGroup \
If you have an existing virtual network that meets these criteria, you can skip
1. Create the subnet using the [az network vnet subnet create][az-network-vnet-subnet-create] command. The following command creates a subnet in your virtual network with a delegation that permits it to create container groups. For more information about working with subnets, see the [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md). For more information about subnet delegation, see the [Virtual Network Scenarios and Resources article section on delegated subnets](container-instances-virtual-network-concepts.md#subnet-delegated).
- ```azurecli
+ ```azurecli-interactive
az network vnet subnet create \ --name aci-subnet \ --resource-group ACIResourceGroup \
If you have an existing virtual network that meets these criteria, you can skip
1. Create the private DNS Zone using the [az network private-dns zone create][az-network-private-dns-zone-create] command.
- ```azurecli
+ ```azurecli-interactive
az network private-dns zone create -g ACIResourceGroup -n private.contoso.com
```

1. Link the DNS zone to your virtual network using the [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] command. The DNS server is only required to test name resolution. The `-e` flag enables automatic hostname registration, which isn't needed here, so set it to `false`.
- ```azurecli
+ ```azurecli-interactive
az network private-dns link vnet create \ -g ACIResourceGroup \ -n aciDNSLink \
type: Microsoft.ContainerInstance/containerGroups
Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name with the `--file` parameter:
-```azurecli
+```azurecli-interactive
az container create --resource-group ACIResourceGroup \
  --file custom-dns-deploy-aci.yaml
```

Once the deployment is complete, run the [az container show][az-container-show] command to display its status. Sample output:
-```azurecli
+```azurecli-interactive
az container show --resource-group ACIResourceGroup --name pwsh-vnet-dns -o table ```
-```console
+```output
Name           ResourceGroup     Status    Image                          IP:ports      Network    CPU/Memory       OsType    Location
-------------  ----------------  --------  -----------------------------  ------------  ---------  ---------------  --------  ----------
pwsh-vnet-dns  ACIResourceGroup  Running   mcr.microsoft.com/powershell   10.0.0.5:80   Private    1.0 core/2.0 gb  Linux     westus
After the status shows `Running`, execute the [az container exec][az-container-exec] command to obtain bash access within the container.
-```azurecli
+```azurecli-interactive
az container exec --resource-group ACIResourceGroup --name pwsh-vnet-dns --exec-command "/bin/bash"
```

Validate that DNS is working as expected from within your container. For example, read the `/etc/resolv.conf` file to ensure it's configured with the DNS settings provided in the YAML file.
-```console
+```bash
root@wk-caas-81d609b206c541589e11058a6d260b38-90b0aff460a737f346b3b0:/# cat /etc/resolv.conf
nameserver 10.0.0.10
search contoso.com
When you're finished with the container instance you created, delete it with the [az container delete][az-container-delete] command:
-```azurecli
+```azurecli-interactive
az container delete --resource-group ACIResourceGroup --name pwsh-vnet-dns -y ```
az container delete --resource-group ACIResourceGroup --name pwsh-vnet-dns -y
If you don't plan to use this virtual network again, you can delete it with the [az network vnet delete][az-network-vnet-delete] command:
-```azurecli
+```azurecli-interactive
az network vnet delete --resource-group ACIResourceGroup --name aci-vnet ```
az network vnet delete --resource-group ACIResourceGroup --name aci-vnet
If you don't plan to use this resource group outside of this guide, you can delete it with [az group delete][az-group-delete] command:
-```azurecli
+```azurecli-interactive
az group delete --name ACIResourceGroup ```
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command:
-```azurecli
+```azurecli-interactive
groupId=$(az group show \ --name <resource-group-name> \ --query id --output tsv)
groupId=$(az group show \
Use [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] to create the service principal:
-```azurecli
+```azurecli-interactive
az ad sp create-for-rbac \ --scope $groupId \ --role Contributor \
OpenID Connect is an authentication method that uses short-lived tokens. Setting
* For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
* For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
- ```azurecli
+ ```azurecli-interactive
az ad app federated-credential create --id <APPLICATION-OBJECT-ID> --parameters credential.json

("credential.json" contains the following content)

{
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command:
-```azurecli
+```azurecli-interactive
registryId=$(az acr show \ --name <registry-name> \ --resource-group <resource-group-name> \
registryId=$(az acr show \
Use [az role assignment create][az-role-assignment-create] to assign the AcrPush role, which gives push and pull access to the registry. Substitute the client ID of your service principal:
-```azurecli
+```azurecli-interactive
az role assignment create \ --assignee <ClientId> \ --scope $registryId \
You need to give your application permission to access the Azure Container Regis
1. Search for your OpenID Connect app registration and copy the **Application (client) ID**. 1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
- ```azurecli
+ ```azurecli-interactive
az role assignment create \ --assignee <appID> \ --role Contributor \
See [Viewing workflow run history](https://docs.github.com/en/actions/managing-w
When the workflow completes successfully, get information about the container instance named *aci-sampleapp* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
-```azurecli
+```azurecli-interactive
az container show \ --resource-group <resource-group-name> \ --name aci-sampleapp \
az container show \
Output is similar to:
-```console
+```output
FQDN                                   ProvisioningState
-------------------------------------  -------------------
aci-action01.westus.azurecontainer.io  Succeeded
In addition to the [prerequisites](#prerequisites) and [repo setup](#set-up-repo
Run the [az extension add][az-extension-add] command to install the extension:
-```azurecli
+```azurecli-interactive
az extension add \
  --name deploy-to-azure
```
To run the [az container app up][az-container-app-up] command, provide at minimu
Sample command:
-```azurecli
+```azurecli-interactive
az container app up \ --acr myregistry \ --repository https://github.com/myID/acr-build-helloworld-node
az container app up \
Output is similar to:
-```console
+```output
[...]
Checking in file github/workflows/main.yml in the GitHub repository myid/acr-build-helloworld-node
Creating workflow...
To view the workflow status and results of each step in the GitHub UI, see [View
The workflow deploys an Azure container instance with the base name of your GitHub repo, in this case, *acr-build-helloworld-node*. When the workflow completes successfully, get information about the container instance named *acr-build-helloworld-node* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
-```azurecli
+```azurecli-interactive
az container show \ --resource-group <resource-group-name> \ --name acr-build-helloworld-node \
az container show \
Output is similar to:
-```console
+```output
FQDN ProvisioningState - acr-build-helloworld-node.westus.azurecontainer.io Succeeded
After the instance is provisioned, navigate to the container's FQDN in your brow
Stop the container instance with the [az container delete][az-container-delete] command:
-```azurecli
+```azurecli-interactive
az container delete \
  --name <instance-name> \
  --resource-group <resource-group-name>
To delete the resource group and all the resources in it, run the [az group delete][az-group-delete] command:
-```azurecli
+```azurecli-interactive
az group delete \ --name <resource-group-name> ```
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
properties:
Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter. You need to supply the name of a resource group and a location for the container group such as *eastus* that supports GPU resources.
-```azurecli
+```azurecli-interactive
az container create --resource-group myResourceGroup --file gpu-deploy-aci.yaml --location eastus
```

The deployment takes several minutes to complete. Then, the container starts and runs a CUDA vector addition operation. Run the [az container logs][az-container-logs] command to view the log output:
-```azurecli
+```azurecli-interactive
az container logs --resource-group myResourceGroup --name gpucontainergroup --container-name gpucontainer
```

Output:
-```Console
+```output
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
az deployment group create --resource-group myResourceGroup --template-file gpud
The deployment takes several minutes to complete. Then, the container starts and runs the TensorFlow job. Run the [az container logs][az-container-logs] command to view the log output:
-```azurecli
+```azurecli-interactive
az container logs --resource-group myResourceGroup --name gpucontainergrouprm --container-name gpucontainer
```

Output:
-```Console
+```output
2018-10-25 18:31:10.155010: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-25 18:31:10.305937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
Adding run metadata for 999
Because using GPU resources may be expensive, ensure that your containers don't run unexpectedly for long periods. Monitor your containers in the Azure portal, or check the status of a container group with the [az container show][az-container-show] command. For example:
-```azurecli
+```azurecli-interactive
az container show --resource-group myResourceGroup --name gpucontainergroup --output table
```

When you're done working with the container instances you created, delete them with the following commands:
-```azurecli
+```azurecli-interactive
az container delete --resource-group myResourceGroup --name gpucontainergroup -y
az container delete --resource-group myResourceGroup --name gpucontainergrouprm -y
```
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
az container show \
The `identity` section in the output looks similar to the following, showing the identity is set in the container group. The `principalID` under `userAssignedIdentities` is the service principal of the identity you created in Azure Active Directory:
-```console
+```output
[...] "identity": { "principalId": "null",
az container show \
The `identity` section in the output looks similar to the following, showing that a system-assigned identity is created in Azure Active Directory:
-```console
+```output
[...] "identity": { "principalId": "xxxxxxxx-528d-7083-b74c-xxxxxxxxxxxx",
az container exec \
Run the following commands in the bash shell in the container. First log in to the Azure CLI using the managed identity:
-```azurecli
+```azurecli-interactive
az login --identity
```

From the running container, retrieve the secret from the key vault:
-```azurecli
+```azurecli-interactive
az keyvault secret show \
  --name SampleSecret \
  --vault-name mykeyvault --query value
container-instances Container Instances Readiness Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-readiness-probe.md
The deployment includes a `command` property defining a starting command that ru
First, it starts a shell session and runs a `node` command to start the web app. It also starts a command to sleep for 240 seconds, after which it creates a file called `ready` within the `/tmp` directory:
-```console
+```bash
node /usr/src/app/index.js & (sleep 240; touch /tmp/ready); wait ```
These events can be viewed from the Azure portal or Azure CLI. For example, the
After starting the container, you can verify that it's not accessible initially. After provisioning, get the IP address of the container group:
-```azurecli
+```azurecli-interactive
az container show --resource-group myResourceGroup --name readinesstest --query "ipAddress.ip" --out tsv ```
wget <ipAddress>
```

Output shows the site isn't accessible initially:
+```bash
+wget 192.0.2.1
```
-$ wget 192.0.2.1
+```output
--2019-10-15 16:46:02--  http://192.0.2.1/
Connecting to 192.0.2.1... connected.
HTTP request sent, awaiting response...
After 240 seconds, the readiness command succeeds, signaling the container is ready. Now, when you run the `wget` command, it succeeds:
+```bash
+wget 192.0.2.1
```
-$ wget 192.0.2.1
+```output
--2019-10-15 16:46:02--  http://192.0.2.1/
Connecting to 192.0.2.1... connected.
HTTP request sent, awaiting response... 200 OK
container-instances Container Instances Restart Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-restart-policy.md
az container show \
Example output:
-```bash
+```output
"Terminated" ```
az container logs --resource-group myResourceGroup --name mycontainer
Output:
-```bash
+```output
[('the', 990), ('and', 702), ('of', 628),
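For context, the restart policy that produced this run is set at creation time; a minimal hedged sketch (the image is the word-count image this article uses elsewhere, and the policy value is illustrative):

```azurecli-interactive
# OnFailure restarts the container only when it exits with a nonzero code
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-wordcount:latest \
  --restart-policy OnFailure
```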
container-instances Container Instances Start Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-start-command.md
Once the container's state shows as *Terminated* (use [az container show][az-con
az container logs --resource-group myResourceGroup --name mycontainer1 ```
-Output:
-
-```console
+```output
[('HAMLET', 386), ('HORATIO', 127), ('CLAUDIUS', 120)] ```
Again, once the container is *Terminated*, view the output by showing the contai
az container logs --resource-group myResourceGroup --name mycontainer2 ```
-Output:
-
-```console
+```output
[('ROMEO', 177), ('JULIET', 134), ('CAPULET', 119)] ```
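For context, these word counts come from overriding the image's default entry point at creation time; a minimal hedged sketch (the command-line string is illustrative of this tutorial's pattern):

```azurecli-interactive
# Override the default start command; the argument selects the top-N words
az container create \
  --resource-group myResourceGroup \
  --name mycontainer2 \
  --image mcr.microsoft.com/azuredocs/aci-wordcount:latest \
  --restart-policy OnFailure \
  --command-line "python wordcount.py 3"
```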
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
To resolve this issue, delete the container instance and retry your deployment.
If the image can't be pulled, events like the following are shown in the output of [az container show][az-container-show]:
-```bash
+```json
"events": [ { "count": 3,
If your container takes a long time to start, but eventually succeeds, start by
You can view the size of your container image by using the `docker images` command in the Docker CLI:
-```console
-$ docker images
+```bash
+docker images
+```
+```output
REPOSITORY                                   TAG      IMAGE ID       CREATED          SIZE
mcr.microsoft.com/azuredocs/aci-helloworld   latest   7367f3256b41   15 months ago    67.6MB
```
Azure Container Instances doesn't yet support port mapping like with regular doc
If you want to confirm that Azure Container Instances can listen on the port you configured in your container image, test a deployment of the `aci-helloworld` image that exposes the port. Also run the `aci-helloworld` app so that it listens on the port. `aci-helloworld` accepts an optional environment variable `PORT` to override the default port 80 it listens on. For example, to test port 9000, set the [environment variable](container-instances-environment-variables.md) when you create the container group:

1. Set up the container group to expose port 9000, and pass the port number as the value of the environment variable. The example is formatted for the Bash shell. If you prefer another shell such as PowerShell or Command Prompt, you'll need to adjust the variable assignment accordingly.
- ```azurecli
+ ```azurecli-interactive
az container create --resource-group myResourceGroup \ --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld \ --ip-address Public --ports 9000 \
If you want to confirm that Azure Container Instances can listen on the port you
You should see the "Welcome to Azure Container Instances!" message displayed by the web app. 1. When you're done with the container, remove it using the `az container delete` command:
- ```azurecli
+ ```azurecli-interactive
az container delete --resource-group myResourceGroup --name mycontainer ```
container-instances Container Instances Tutorial Azure Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-azure-function-trigger.md
The following commands enable a system-assigned [managed identity](../app-servic
[Add an identity](../app-service/overview-managed-identity.md?tabs=ps%2Cdotnet) to the function app:
-```powershell
+```azurepowershell-interactive
Update-AzFunctionApp -Name myfunctionapp ` -ResourceGroupName myfunctionapp ` -IdentityType SystemAssigned
Update-AzFunctionApp -Name myfunctionapp `
Assign the identity the contributor role scoped to the resource group:
-```powershell
+```azurepowershell-interactive
$SP=(Get-AzADServicePrincipal -DisplayName myfunctionapp).Id
$RG=(Get-AzResourceGroup -Name myfunctionapp).ResourceId
New-AzRoleAssignment -ObjectId $SP -RoleDefinitionName "Contributor" -Scope $RG
Modify the PowerShell code for the **HttpTrigger** function to create a container group. In file `run.ps1` for the function, find the following code block. This code displays a name value, if one is passed as a query string in the function URL:
-```powershell
+```azurepowershell-interactive
[...]
if ($name) {
    $body = "Hello, $name. This HTTP triggered function executed successfully."
Replace this code with the following example block. Here, if a name value is passed in the query string, it's used to name and create a container group using the [New-AzContainerGroup][new-azcontainergroup] cmdlet. Make sure to replace the resource group name *myfunctionapp* with the name of the resource group for your function app:
-```powershell
+```azurepowershell-interactive
[...]
if ($name) {
    New-AzContainerGroup -ResourceGroupName myfunctionapp -Name $name `
After the deployment completes successfully, get the function URL. For example,
The function URL is of the form:
-```
+```config
https://myfunctionapp.azurewebsites.net/api/HttpTrigger
```
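Given the function's `name` query-parameter handling shown above, a test invocation sketch (the hostname is the placeholder form above):

```azurecli-interactive
curl "https://myfunctionapp.azurewebsites.net/api/HttpTrigger?name=mycontainergroup"
```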
This HTTP triggered function executed successfully. Started container group myco
Verify that the container ran with the [Get-AzContainerInstanceLog][get-azcontainerinstancelog] command:
-```azurecli
+```azurecli-interactive
Get-AzContainerInstanceLog -ResourceGroupName myfunctionapp `
  -ContainerGroupName mycontainergroup
```

Sample output:
-```console
+```output
Hello from an Azure container instance triggered by an Azure function ```
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
With the ARM template that you've crafted and the Azure CLI confcom extension, y
1. To generate the CCE policy, you'll run the following command using the ARM template as input:
- ```bash
+ ```azurecli-interactive
az confcom acipolicygen -a .\template.json --print-policy
```

When this command completes, you should see a Base64 string generated as output in the format seen below. This string is the CCE policy that you'll copy and paste into your ARM template under the `ccePolicy` property.
- ```bash
+ ```output
cGFja2FnZSBwb2xpY3kKCmFwaV9zdm4gOj0gIjAuOS4wIgoKaW1wb3J0IGZ1dHVyZS5rZXl3b3Jkcy5ldmVyeQppbXBvcnQgZnV0dXJlLmtleXdvcmRzLmluCgpmcmFnbWVudHMgOj0gWwpdCgpjb250YWluZXJzIDo9IFsKICAgIHsKICAgICAgICAiY29tbWFuZCI6IFsiL3BhdXNlIl0sCiAgICAgICAgImVudl9ydWxlcyI6IFt7InBhdHRlcm4iOiAiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogdHJ1ZX0seyJwYXR0ZXJuIjogIlRFUk09eHRlcm0iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogZmFsc2V9XSwKICAgICAgICAibGF5ZXJzIjogWyIxNmI1MTQwNTdhMDZhZDY2NWY5MmMwMjg2M2FjYTA3NGZkNTk3NmM3NTVkMjZiZmYxNjM2NTI5OTE2OWU4NDE1Il0sCiAgICAgICAgIm1vdW50cyI6IFtdLAogICAgICAgICJleGVjX3Byb2Nlc3NlcyI6IFtdLAogICAgICAgICJzaWduYWxzIjogW10sCiAgICAgICAgImFsbG93X2VsZXZhdGVkIjogZmFsc2UsCiAgICAgICAgIndvcmtpbmdfZGlyIjogIi8iCiAgICB9LApdCmFsbG93X3Byb3BlcnRpZXNfYWNjZXNzIDo9IHRydWUKYWxsb3dfZHVtcF9zdGFja3MgOj0gdHJ1ZQphbGxvd19ydW50aW1lX2xvZ2dpbmcgOj0gdHJ1ZQphbGxvd19lbnZpcm9ubWVudF92YXJpYWJsZV9kcm9wcGluZyA6PSB0cnVlCmFsbG93X3VuZW5jcnlwdGVkX3NjcmF0Y2ggOj0gdHJ1ZQoKCm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQp1bm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQptb3VudF9vdmVybGF5IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnVubW91bnRfb3ZlcmxheSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpjcmVhdGVfY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfaW5fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfZXh0ZXJuYWwgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2h1dGRvd25fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNpZ25hbF9jb250YWluZXJfcHJvY2VzcyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV9tb3VudCA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmdldF9wcm9wZXJ0aWVzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmR1bXBfc3RhY2tzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJ1bnRpbWVfbG9nZ2luZyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpsb2FkX2ZyYWdtZW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNjcmF0Y2hfbW91bnQgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2NyYXRjaF91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJlYXNvbiA6PSB7ImVycm9ycyI6IGRhdGEuZnJhbWV3b3JrLmVycm9yc30K ```
container-instances Container Instances Tutorial Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-acr.md
To push a container image to a private registry like Azure Container Registry, y
First, get the full login server name for your Azure container registry. Run the following [az acr show][az-acr-show] command, and replace `<acrName>` with the name of the registry you just created:
-```azurecli
+```azurecli-interactive
az acr show --name <acrName> --query loginServer --output table
```

For example, if your registry is named *mycontainerregistry082*:
-```azurecli
+```azurecli-interactive
az acr show --name mycontainerregistry082 --query loginServer --output table
```
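Before tagging and pushing in the later steps, the local Docker client must be authenticated against the registry. A convenience sketch, assuming the example registry name above; capturing the login server in a variable also avoids retyping it:

```azurecli-interactive
# Store the login server name for the docker tag/push commands that follow.
ACR_LOGIN_SERVER=$(az acr show --name mycontainerregistry082 --query loginServer --output tsv)

# Authenticate the local Docker client against the registry.
az acr login --name mycontainerregistry082

echo $ACR_LOGIN_SERVER
```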
docker images
Along with any other images you have on your machine, you should see the *aci-tutorial-app* image you built in the [previous tutorial](container-instances-tutorial-prepare-app.md):
-```console
-$ docker images
+```bash
+docker images
+```
+```output
REPOSITORY          TAG       IMAGE ID        CREATED           SIZE
aci-tutorial-app    latest    5c745774dfa9    39 minutes ago    68.1 MB
```
docker tag aci-tutorial-app <acrLoginServer>/aci-tutorial-app:v1
Run `docker images` again to verify the tagging operation:
-```console
-$ docker images
+```bash
+docker images
+```
+```output
REPOSITORY                                            TAG       IMAGE ID        CREATED           SIZE
aci-tutorial-app                                      latest    5c745774dfa9    39 minutes ago    68.1 MB
mycontainerregistry082.azurecr.io/aci-tutorial-app    v1        5c745774dfa9    7 minutes ago     68.1 MB
```
docker push <acrLoginServer>/aci-tutorial-app:v1
The `push` operation should take a few seconds to a few minutes depending on your internet connection, and output is similar to the following:
-```console
-$ docker push mycontainerregistry082.azurecr.io/aci-tutorial-app:v1
+```bash
+docker push mycontainerregistry082.azurecr.io/aci-tutorial-app:v1
+```
+```output
The push refers to a repository [mycontainerregistry082.azurecr.io/aci-tutorial-app]
3db9cac20d49: Pushed
13f653351004: Pushed
v1: digest: sha256:ed67fff971da47175856505585dcd92d1270c3b37543e8afd46014d328f05
To verify that the image you just pushed is indeed in your Azure container registry, list the images in your registry with the [az acr repository list][az-acr-repository-list] command. Replace `<acrName>` with the name of your container registry.
-```azurecli
+```azurecli-interactive
az acr repository list --name <acrName> --output table
```

For example:
-```azurecli
+```azurecli-interactive
az acr repository list --name mycontainerregistry082 --output table
```
aci-tutorial-app
To see the *tags* for a specific image, use the [az acr repository show-tags][az-acr-repository-show-tags] command.
-```azurecli
+```azurecli-interactive
az acr repository show-tags --name <acrName> --repository aci-tutorial-app --output table ``` You should see output similar to the following:
-```console
+```output
Result
--------
v1
```
container-instances Container Instances Tutorial Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-app.md
The sample application in this tutorial is a simple web app built in [Node.js][n
Use Git to clone the sample application's repository:
-```bash
+```git
git clone https://github.com/Azure-Samples/aci-helloworld.git
```
docker build ./aci-helloworld -t aci-tutorial-app
Output from the [docker build][docker-build] command is similar to the following (truncated for readability):
-```console
-$ docker build ./aci-helloworld -t aci-tutorial-app
+```bash
+docker build ./aci-helloworld -t aci-tutorial-app
+```
+```output
Sending build context to Docker daemon  119.3kB
Step 1/6 : FROM node:8.9.3-alpine
8.9.3-alpine: Pulling from library/node
docker images
Your newly built image should appear in the list:
-```console
-$ docker images
+```bash
+docker images
+```
+```output
REPOSITORY          TAG       IMAGE ID        CREATED          SIZE
aci-tutorial-app    latest    5c745774dfa9    39 seconds ago   68.1 MB
```
docker run -d -p 8080:80 aci-tutorial-app
Output from the `docker run` command displays the running container's ID if the command was successful:
-```console
-$ docker run -d -p 8080:80 aci-tutorial-app
+```bash
+docker run -d -p 8080:80 aci-tutorial-app
+```
+```output
a2e3e4435db58ab0c664ce521854c2e1a1bda88c9cf2fcff46aedf48df86cccf
```
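To confirm the container is serving traffic, you could request the page through the mapped port; a quick check, assuming the sample app started as expected:

```bash
# The -d run above mapped host port 8080 to container port 80.
curl http://localhost:8080
```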
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Once you've deployed your first container group with this method, you can deploy
The following [az container create][az-container-create] command specifies settings for a new virtual network and subnet. Provide the name of a resource group that was created in a region where container group deployments in a virtual network are [available](container-instances-region-availability.md). This command deploys the public Microsoft [aci-helloworld][aci-helloworld] container that runs a small Node.js webserver serving a static web page. In the next section, you'll deploy a second container group to the same subnet, and test communication between the two container instances.
-```azurecli
+```azurecli-interactive
az container create \
  --name appcontainer \
  --resource-group myResourceGroup \
The following example deploys a second container group to the same subnet create
First, get the IP address of the first container group you deployed, the *appcontainer*:
-```azurecli
+```azurecli-interactive
az container show --resource-group myResourceGroup \
  --name appcontainer \
  --query ipAddress.ip --output tsv
```
az container show --resource-group myResourceGroup \
The output displays the IP address of the container group in the private subnet. For example:
-```console
+```output
10.0.0.4
```

Now, set `CONTAINER_GROUP_IP` to the IP you retrieved with the `az container show` command, and execute the following `az container create` command. This second container, *commchecker*, runs an Alpine Linux-based image and executes `wget` against the first container group's private subnet IP address.
-```azurecli
+```azurecli-interactive
CONTAINER_GROUP_IP=<container-group-IP-address>

az container create \
az container create \
After this second container deployment has completed, pull its logs so you can see the output of the `wget` command it executed:
-```azurecli
+```azurecli-interactive
az container logs --resource-group myResourceGroup --name commchecker
```

If the second container communicated successfully with the first, output is similar to:
-```console
+```output
Connecting to 10.0.0.4 (10.0.0.4:80)
index.html           100% |*******************************|  1663   0:00:00 ETA
```
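If the logs come back empty, the group may still be starting; one way to check is to query its state before pulling logs again (a sketch; `instanceView.state` is an assumption about the returned JSON shape):

```azurecli-interactive
az container show --resource-group myResourceGroup \
  --name commchecker \
  --query instanceView.state --output tsv
```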
type: Microsoft.ContainerInstance/containerGroups
Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter:
-```azurecli
+```azurecli-interactive
az container create --resource-group myResourceGroup \
  --file vnet-deploy-aci.yaml
```

Once the deployment completes, run the [az container show][az-container-show] command to display its status. Sample output:
-```console
+```output
Name              ResourceGroup    Status    Image                                       IP:ports     Network    CPU/Memory       OsType    Location
----------------  ---------------  --------  ------------------------------------------  -----------  ---------  ---------------  --------  ----------
appcontaineryaml  myResourceGroup  Running   mcr.microsoft.com/azuredocs/aci-helloworld  10.0.0.5:80  Private    1.0 core/1.5 gb  Linux     westus
```
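The digest omits the exact command that produced this table; a plausible form, using the table output format (the column set may vary by API version):

```azurecli-interactive
az container show --resource-group myResourceGroup \
  --name appcontaineryaml \
  --output table
```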
appcontaineryaml myResourceGroup Running mcr.microsoft.com/azuredocs/aci-hel
When you're done working with the container instances you created, delete them with the following commands:
-```azurecli
+```azurecli-interactive
az container delete --resource-group myResourceGroup --name appcontainer -y
az container delete --resource-group myResourceGroup --name commchecker -y
az container delete --resource-group myResourceGroup --name appcontaineryaml -y
```
Before executing the script, set the `RES_GROUP` variable to the name of the res
> [!WARNING]
> This script deletes resources! It deletes the virtual network and all subnets it contains. Be sure that you no longer need *any* of the resources in the virtual network, including any subnets it contains, prior to running this script. Once deleted, **these resources are unrecoverable**.
-```azurecli
+```azurecli-interactive
# Replace <my-resource-group> with the name of your resource group
# Assumes one virtual network in resource group
RES_GROUP=<my-resource-group>
container-instances Container State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-state.md
This value is the state of the last operation performed on a container group. Ge
> [!IMPORTANT]
> Additionally, users should not create dependencies on non-terminal provisioning states. Dependencies on **Succeeded** and **Failed** states are acceptable.
-In addition to the JSON view, provisioning state can be also be found in the [response body of the HTTP call](/rest/api/container-instances/containergroups/createorupdate#response).
+In addition to the JSON view, provisioning state can be also be found in the [response body of the HTTP call](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#response).
### Create, start, and restart operations
These values are applicable to POST (stop) and DELETE (delete) events.
- **Succeeded**: The operation to stop or delete the container group completed successfully.
-- **Failed**: The container group failed to reach the **Succeeded** provisioning state, meaning the stop/delete event did not complete. More information on the failure can be found under `events` in the JSON view.
+- **Failed**: The container group failed to reach the **Succeeded** provisioning state, meaning the stop/delete event did not complete. More information on the failure can be found under `events` in the JSON view.
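For scripting against these states, the provisioning state can be read directly from the resource; a sketch, with the container group name assumed:

```azurecli-interactive
az container show --resource-group myResourceGroup \
  --name mycontainergroup \
  --query provisioningState --output tsv
```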
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Azure Cosmos DB uses HMAC for authorization. You can use either a primary key, o
## Limits for autoscale provisioned throughput
-See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.yml#lowering-the-max-ru-s) for more detailed explanation of the throughput and storage limits with autoscale.
+See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article and [FAQ](autoscale-faq.yml#how-do-i-lower-the-maximum-ru-s) for a more detailed explanation of the throughput and storage limits with autoscale.
| Resource | Limit |
| --- | --- |
See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage |
| Minimum billable RU/s per hour | `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded up to the nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).|
## SQL query limits
In addition to the previous table, the [Per-account limits](#per-account-limits)
* Read more about [global distribution](distribute-data-globally.md)
* Read more about [partitioning](partitioning-overview.md) and [provisioned throughput](request-units.md).
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
| `Multikey Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
| `Text Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
| `Geospatial Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `Hashed Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
+| `Hashed Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
### Index properties
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md
For any value of `Tmax`, the database or container can store a total of `0.1 * T
For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 5000 GB - e.g. storage is now 6000 GB - the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
-When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
+When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-maximum-ru-s-on-a-database-or-container--) for more information.
## Comparison – containers configured with manual vs autoscale throughput

For more detail, see this [documentation](how-to-choose-offer.md) on how to choose between standard (manual) and autoscale throughput.
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
Increase your RU/s to: `10,000 * P * (2 ^ ROUNDUP(LOG_2(S / (10,000 * P))))`. Th
For example, suppose we have five physical partitions, 50,000 RU/s and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * (2 ^ ROUNDUP(LOG_2(150,000 / (10,000 * 5))))` = 200,000 RU/s, and then lower to 150,000 RU/s.
-When we scaled up to 200,000 RU/s, the lowest manual RU/s we can now set in the future is 2000 RU/s. The [lowest autoscale max RU/s](autoscale-faq.yml#lowering-the-max-ru-s) we can set is 20,000 RU/s (scales between 2000 - 20,000 RU/s). Since our target RU/s is 150,000 RU/s, we are not affected by the minimum RU/s.
+When we scaled up to 200,000 RU/s, the lowest manual RU/s we can now set in the future is 2000 RU/s. The [lowest autoscale max RU/s](autoscale-faq.yml#how-do-i-lower-the-maximum-ru-s) we can set is 20,000 RU/s (scales between 2000 - 20,000 RU/s). Since our target RU/s is 150,000 RU/s, we are not affected by the minimum RU/s.
## How to optimize RU/s for large data ingestion
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 04/14/2023 Last updated : 05/03/2023
You can also exchange a reservation to purchase another reservation of a similar
> You may [trade-in](../savings-plan/reservation-trade-in.md) your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you'll need and want additional savings. Learn more about [Azure savings plan for compute and how it works with reservations](../savings-plan/index.yml).
-When you exchange a reservation, you can change your term from one-year to three-year.
+When you exchange a reservation, you can change your term from one-year to three-year. Or, you can change the term from three-year to one-year.
You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12-month rolling window.
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
description: Azure Hybrid Benefit is a licensing benefit that lets you bring you
keywords: Previously updated : 04/28/2023 Last updated : 05/03/2023
To use centrally managed licenses, you must have a specific role assigned to you
- Enterprise Agreement
  - Enterprise Administrator
- If you're not an Enterprise admin, your organization must assign you that role with full access. For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
+ If you're not an Enterprise admin, you need to contact one and either:
+ - Have them give you the enterprise administrator role with full access.
+ - Contact your Microsoft account team to have them identify your primary enterprise administrator.
+ For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
- Microsoft Customer Agreement
  - Billing account owner
  - Billing account contributor
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
Title: Transform data from an SAP ODP source with the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics
-description: Learn how to transform data from an SAP ODP source to supported sink data stores by using mapping data flows in Azure Data Factory or Azure Synapse Analytics.
+description: Learn how to transform data from an SAP ODP source by using mapping data flows in Azure Data Factory or Azure Synapse Analytics.
Last updated 04/14/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use mapping data flow to transform data from an SAP ODP source using the SAP CDC connector. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). For an introduction to transforming data with Azure Data Factory and Azure Synapse analytics, read [mapping data flow](concepts-data-flow-overview.md).
+This article outlines how to use mapping data flow to transform data from an SAP ODP source using the SAP CDC connector. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). For an introduction to transforming data with Azure Data Factory and Azure Synapse analytics, read [mapping data flow](concepts-data-flow-overview.md) or the [tutorial on mapping data flow](tutorial-data-flow.md).
>[!TIP]
>To learn about the overall support for the SAP data integration scenario, see the [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf) with a detailed introduction, comparison, and guidance on each SAP connector.
This SAP CDC connector is supported for the following capabilities:
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-This SAP CDC connector leverages the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md).
+This SAP CDC connector uses the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md).
-The SAP ODP framework is contained in most SAP NetWeaver based systems, including SAP ECC, SAP S/4HANA, SAP BW, SAP BW/4HANA, SAP LT Replication Server (SLT), except very old ones. For prerequisites and minimum required releases, see [Prerequisites and configuration](sap-change-data-capture-prerequisites-configuration.md#sap-system-requirements).
+The SAP ODP framework is contained in all up-to-date SAP NetWeaver based systems, including SAP ECC, SAP S/4HANA, SAP BW, SAP BW/4HANA, SAP LT Replication Server (SLT). For prerequisites and minimum required releases, see [Prerequisites and configuration](sap-change-data-capture-prerequisites-configuration.md#sap-system-requirements).
The SAP CDC connector supports basic authentication or Secure Network Communications (SNC), if SNC is configured.
To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-c
## Transform data with the SAP CDC connector
-SAP CDC datasets can be used as source in mapping data flow. Since the raw SAP ODP change feed is difficult to interpret and to correctly update to a sink, mapping data flow takes care of this by evaluating technical attributes provided by the ODP framework (e.g., ODQ_CHANGEMODE) automatically. This allows users to concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc.
+SAP CDC datasets can be used as source in mapping data flow. The raw SAP ODP change feed is difficult to interpret and updating it correctly to a sink can be a challenge. Mapping data flow takes care of this complexity by automatically evaluating technical attributes that are provided by the ODP framework (like ODQ_CHANGEMODE). Users can therefore concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc.
To get started, create a pipeline with a mapping data flow.

:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-pipeline-dataflow-activity.png" alt-text="Screenshot of add data flow activity in pipeline.":::
-Next, specify a staging folder in Azure Data Lake Gen2, which will serve as an intermediate storage for data extracted from SAP.
+Next, specify a staging linked service and staging folder in Azure Data Lake Gen2, which serves as an intermediate storage for data extracted from SAP.
+
+ >[!NOTE]
+ >The staging linked service cannot use a self-hosted integration runtime.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-staging-folder.png" alt-text="Screenshot of specify staging folder in data flow activity.":::
To create a mapping data flow using the SAP CDC connector as a source, complete
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-add-source.png" alt-text="Screenshot of add source in mapping data flow.":::
-1. On the tab **Source settings** select a prepared SAP CDC dataset or select the **New** button to create a new one. Alternatively, you can also select **Inline** in the **Source type** property and continue without defining an explicit dataset.
+1. On the tab **Source settings**, select a prepared SAP CDC dataset or select the **New** button to create a new one. Alternatively, you can also select **Inline** in the **Source type** property and continue without defining an explicit dataset.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source.":::
-1. On the tab **Source options** select the option **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In this case, the first run of your pipeline will do a delta initialization, which means it will return a current full data snapshot and create an ODP delta subscription in the source system so that with subsequent runs, the SAP source system will return incremental changes since the previous run only. You can also do **incremental changes only** if you want to create an ODP delta subscription in the SAP source system in the first run of your pipeline without returning any data, and with subsequent runs, the SAP source system will return incremental changes since the previous run only. In case of incremental loads it is required to specify the keys of the ODP source object in the **Key columns** property.
+1. On the tab **Source options**, select the option **Full on every run** if you want to load full snapshots on every execution of your mapping data flow. Select **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system including an initial full data snapshot. In this case, the first run of your pipeline performs a delta initialization, which means it creates an ODP delta subscription in the source system and returns a current full data snapshot. Subsequent pipeline runs only return incremental changes since the preceding run. The option **incremental changes only** creates an ODP delta subscription without returning an initial full data snapshot in the first run. Again, subsequent runs return incremental changes since the preceding run only. Both incremental load options require you to specify the keys of the ODP source object in the **Key columns** property.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-run-mode.png" alt-text="Screenshot of the run mode property in source options of mapping data flow source."::: :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-key-columns.png" alt-text="Screenshot of the key columns selection in source options of mapping data flow source.":::
-1. For the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md).
+1. For the tabs **Projection**, **Optimize** and **Inspect**, follow [mapping data flow](concepts-data-flow-overview.md).
+
+### Optimizing performance of full or initial loads with source partitioning
-1. If **Run mode** is set to **Full on every run** or **Full on the first run, then incremental**, the tab **Optimize** offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) will trigger a separate extraction process in the connected SAP system. Up to three of these extraction process are executed in parallel.
+If **Run mode** is set to **Full on every run** or **Full on the first run, then incremental**, the tab **Optimize** offers a selection and partitioning type called **Source**. This option allows you to specify multiple partition (that is, filter) conditions to chunk a large source data set into multiple smaller portions. For each partition, the SAP CDC connector triggers a separate extraction process in the SAP source system.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-optimize-partition.png" alt-text="Screenshot of the partitioning options in optimize of mapping data flow source.":::
+If partitions are equally sized, source partitioning can linearly increase the throughput of data extraction. To achieve such performance improvements, sufficient resources are required in the SAP source system, the virtual machine hosting the self-hosted integration runtime, and the Azure integration runtime.
data-factory Industry Sap Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md
See [pipeline templates](solution-templates-introduction.md) for an overview of
The following table shows the templates related to SAP connectors that can be found in the Azure Data Factory template gallery:
-| SAP Data Store | Scenario | Description |
+| SAP Connector/Data Store | Scenario | Description |
| -- | -- | -- |
+| SAP CDC | [Replicate multiple objects from SAP via SAP CDC](solution-template-replicate-multiple-objects-sap-cdc.md) | Use this template for metadata driven incremental loads from multiple SAP ODP sources to Delta tables in ADLS Gen 2 |
| SAP BW via Open Hub | [Incremental copy to Azure Data Lake Storage Gen 2](load-sap-bw-data.md) | Use this template to incrementally copy SAP BW data via LastRequestID watermark to ADLS Gen 2 |
| SAP HANA | Dynamically copy tables to Azure Data Lake Storage Gen 2 | Use this template to do a full copy of a list of tables from SAP HANA to ADLS Gen 2 |
| SAP Table | Incremental copy to Azure Blob Storage | Use this template to incrementally copy SAP Table data via a date timestamp watermark to Azure Blob Storage |
data-lake-store Data Lake Store Data Transfer Sql Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-transfer-sql-sqoop.md
Before you begin, you must have the following:
**Create Table1**
- ```tsql
+ ```sql
CREATE TABLE [dbo].[Table1](
    [ID] [int] NOT NULL,
    [FName] [nvarchar](50) NOT NULL,
Before you begin, you must have the following:
**Create Table2**
- ```tsql
+ ```sql
CREATE TABLE [dbo].[Table2](
    [ID] [int] NOT NULL,
    [FName] [nvarchar](50) NOT NULL,
Before you begin, you must have the following:
1. Run the following command to add some sample data to **Table1**. Leave **Table2** empty. Later, you'll import data from **Table1** into Data Lake Storage Gen1. Then, you'll export data from Data Lake Storage Gen1 into **Table2**.
- ```tsql
+ ```sql
INSERT INTO [dbo].[Table1] VALUES (1,'Neal','Kell'), (2,'Lila','Fulton'), (3, 'Erna','Myers'), (4,'Annette','Simpson');
```
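The digest truncates the Sqoop commands themselves; for orientation, an import of **Table1** into Data Lake Storage Gen1 looks roughly like the following sketch, run on the cluster (server, database, credentials, and store name are placeholders; the flags are standard Sqoop):

```bash
sqoop import \
  --connect "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>" \
  --username <user> --password <password> \
  --table Table1 \
  --target-dir adl://<datalakestore>.azuredatalakestore.net/Table1 \
  --num-mappers 1
```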
An HDInsight cluster already has the Sqoop packages available. If you've configu
1. Verify that the data was uploaded to the SQL Database table. Use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or Visual Studio to connect to the Azure SQL Database and then run the following query.
- ```tsql
+ ```sql
SELECT * FROM TABLE2
```
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
The Azure Monitor Agent requires more extensions. The ASA extension, which suppo
When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Azure Monitor agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel.
-If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events.
+If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](../azure-monitor/essentials/data-collection-rule-overview.md) to collect the required events. Learn [how to do it with PowerShell or with Azure Policy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-configure-security-events-collection-with-azure-monitor/ba-p/3770719).
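As a rough sketch of the CLI route (the linked post covers PowerShell and Azure Policy in detail), a DCR can be created from a JSON definition with the monitor-control-service extension; the rule file name and its contents are assumptions:

```azurecli-interactive
# The data-collection rule commands ship in an extension.
az extension add --name monitor-control-service

# Create the rule from a JSON definition that lists the security events to collect.
az monitor data-collection rule create \
  --resource-group myResourceGroup \
  --name securityEventsDcr \
  --rule-file security-events-rule.json
```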
Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](plan-defender-for-servers-data-workspace.md#log-analytics-pricing-faq) daily on defined data types that include security events.
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Previously updated : 11/09/2021 Last updated : 05/03/2023
-# Deploy a bring your own license (BYOL) vulnerability assessment solution
+# Deploy a Bring Your Own License (BYOL) vulnerability assessment solution
If you've enabled **Microsoft Defender for Servers**, you're able to use Microsoft Defender for Cloud's built-in vulnerability assessment tool as described in [Integrated Qualys vulnerability scanner for virtual machines](./deploy-vulnerability-assessment-vm.md). This tool is integrated into Defender for Cloud and doesn't require any external licenses - everything's handled seamlessly inside Defender for Cloud. In addition, the integrated scanner supports Azure Arc-enabled machines.
Alternatively, you might want to deploy your own privately licensed vulnerabilit
The BYOL options refer to supported third-party vulnerability assessment solutions. Currently both Qualys and Rapid7 are supported providers.
-Supported solutions report vulnerability data to the partner's management platform. In turn, that platform provides vulnerability and health monitoring data back to Defender for Cloud. You can identify vulnerable VMs on the workload protection dashboard and switch to the partner management console directly from Defender for Cloud for reports and more information.
+Supported solutions report vulnerability data to the partner's management platform. In turn, that platform provides vulnerability and health monitoring data back to Defender for Cloud. You can identify vulnerable VMs on the workload protection dashboard and switch to the partner management console directly from Defender for Cloud for reports and more information.
1. From the [Azure portal](https://azure.microsoft.com/features/azure-portal/), open **Defender for Cloud**.
Supported solutions report vulnerability data to the partner's management platfo
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png" alt-text="The groupings of the machines in the **A vulnerability assessment solution should be enabled on your virtual machines** recommendation page" lightbox="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png":::
- Your VMs will appear in one or more of the following groups:
+ Your VMs appear in one or more of the following groups:
* **Healthy resources** – Defender for Cloud has detected a vulnerability assessment solution running on these VMs.
* **Unhealthy resources** – A vulnerability scanner extension can be deployed to these VMs.
Supported solutions report vulnerability data to the partner's management platfo
1. From the list of unhealthy machines, select the ones to receive a vulnerability assessment solution and select **Remediate**.
- >[!IMPORTANT]
- > Depending on your configuration, you might only see a subset of this list.
- >
- > - If you haven't got a third-party vulnerability scanner configured, you won't be offered the opportunity to deploy it.
- > - If your selected VMs aren't protected by Microsoft Defender for Servers, the Defender for Cloud integrated vulnerability scanner option will be unavailable.
+ > [!IMPORTANT]
+ > Depending on your configuration, you might only see a subset of this list.
+ > - If you haven't got a third-party vulnerability scanner configured, you won't be offered the opportunity to deploy it.
+ > - If your selected VMs aren't protected by Microsoft Defender for Servers, the Defender for Cloud integrated vulnerability scanner option will be unavailable.
- :::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-remediation-options.png" alt-text="The options for which type of remediation flow you want to choose when responding to the recommendation **A vulnerability assessment solution should be enabled on your virtual machines** recommendation page":::
+ :::image type="content" source="media/deploy-vulnerability-assessment-vm/select-vulnerability-solution.png" alt-text="Screenshot of the solutions screen after you have selected the fix button for your resource.":::
1. If you're setting up a new BYOL configuration, select **Configure a new third-party vulnerability scanner**, select the relevant extension, select **Proceed**, and enter the details from the provider as follows:
Supported solutions report vulnerability data to the partner's management platfo
1. To automatically install this vulnerability assessment agent on all discovered VMs in the subscription of this solution, select **Auto deploy**.
1. Select **OK**.
-1. If you've already set up your BYOL solution, select **Deploy your configured third-party vulnerability scanner**, select the relevant extension, and select **Proceed**.
+1. If you have already set up your BYOL solution, select **Deploy your configured third-party vulnerability scanner**, select the relevant extension, and select **Proceed**.
After the vulnerability assessment solution is installed on the target machines, Defender for Cloud runs a scan to detect and identify vulnerabilities in the system and application. It might take a couple of hours for the first scan to complete. After that, it runs hourly.
To run the script, you'll need the relevant information for the parameters below
| **Parameter** | **Required** | **Notes** |
|-|:-:|-|
|**SubscriptionId**|✔|The subscriptionID of the Azure Subscription that contains the resources you want to analyze.|
-|**ResourceGroupName**|✔|Name of the resource group. Use any existing resource group including the default ("DefaultResourceGroup-xxx").<br>Since the solution isn't an Azure resource, it won't be listed under the resource group, but it's still attached to it. If you later delete the resource group, the BYOL solution will be unavailable.|
+|**ResourceGroupName**|✔|Name of the resource group. Use any existing resource group including the default ("DefaultResourceGroup-xxx").<br>Since the solution isn't an Azure resource, it won't be listed under the resource group, but it's still attached to it. If you later delete the resource group, the BYOL solution is unavailable.|
|**vaSolutionName**|✔|The name of the new solution.|
|**vaType**|✔|Qualys or Rapid7.|
|**licenseCode**|✔|Vendor provided license string.|
|**publicKey**|✔|Vendor provided public key.|
|**autoUpdate**|-|Enable (true) or disable (false) auto deploy for this VA solution. When enabled, every new VM on the subscription will automatically attempt to link to the solution.<br/>(Default: False)|

Syntax:

```azurepowershell
Example (this example doesn't include valid license details):
The Qualys Cloud Agent is designed to communicate with Qualys's SOC at regular intervals for updates, and to perform the various operations required for product functionality. To allow the agent to communicate seamlessly with the SOC, configure your network security to allow inbound and outbound traffic to the Qualys SOC CIDR and URLs.
-There are multiple Qualys platforms across various geographic locations. The SOC CIDR and URLs will differ depending on the host platform of your Qualys subscription. To identify your Qualys host platform, use this page https://www.qualys.com/platform-identification/.
-
+There are multiple Qualys platforms across various geographic locations. The SOC CIDR and URLs differ depending on the host platform of your Qualys subscription. To identify your Qualys host platform, use this page https://www.qualys.com/platform-identification/.
### Why do I have to specify a resource group when configuring a BYOL solution?
-When you set up your solution, you must choose a resource group to attach it to. The solution isn't an Azure resource, so it won't be included in the list of the resource group's resources. Nevertheless, it's attached to that resource group. If you later delete the resource group, the BYOL solution will be unavailable.
+When you set up your solution, you must choose a resource group to attach it to. The solution isn't an Azure resource, so it won't be included in the list of the resource group's resources. Nevertheless, it's attached to that resource group. If you later delete the resource group, the BYOL solution is unavailable.
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
If you have machines in the **not applicable** resources group, Defender for Clo
| Microsoft | Windows | All |
| Amazon | Amazon Linux | 2015.09-2018.03 |
| Amazon | Amazon Linux 2 | 2017.03-2.0.2021 |
- | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.5, 9 beta |
+ | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.6, 9 beta |
| Red Hat | CentOS | 5.4-5.11, 6-6.7, 7-7.8, 8-8.5 |
| Red Hat | Fedora | 22-33 |
| SUSE | Linux Enterprise Server (SLES) | 11, 12, 15, 15 SP1 |
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
To remove the Defender for Endpoint solution from your machines:
1. Disable the integration:
    1. From Defender for Cloud's menu, select **Environment settings** and select the subscription with the relevant machines.
- 1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
+ 1. In the Defender plans page, select **Settings & Monitoring**.
    1. In the status of the Endpoint protection component, select **Off** to disable the integration with Microsoft Defender for Endpoint.
    1. Select **Continue** and **Save** to save your settings.
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
When you enable the Servers plan on the subscription level, Defender for Cloud w
However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
-If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (TVM/Qualys), and Just-in-Time VM access.
+If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (MDVM/Qualys), and Just-in-Time VM access.
Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system will identify each unique VM.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
When deploying the CloudFormation template, the Stack creation wizard offers the
1. **Upload a template file** – AWS will automatically create an S3 bucket that the CloudFormation template will be saved to. The automation for the S3 bucket will have a security misconfiguration that will cause the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy:
- ```bash
+ ```json
{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To have full visibility to Microsoft Defender for Servers security content, ensu
- Other extensions should be enabled on the Arc-connected machines.
  - Microsoft Defender for Endpoint
- - VA solution (TVM/ Qualys)
+ - VA solution (Microsoft Defender Vulnerability Management/ Qualys)
  - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA). Ensure the selected workspace has a security solution installed. The LA agent and AMA are currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings regarding the LA agent and AMA.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/20/2023 Last updated : 05/02/2023

# Important upcoming changes to Microsoft Defender for Cloud
Last updated 04/20/2023
> [!IMPORTANT]
> The information on this page relates to pre-release products or features, which may be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.
-On this page, you'll learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows.
+On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows.
-If you're looking for the latest release notes, you'll find them in the [What's new in Microsoft Defender for Cloud](release-notes.md).
+If you're looking for the latest release notes, you can find them in the [What's new in Microsoft Defender for Cloud](release-notes.md).
## Planned changes
If you're looking for the latest release notes, you'll find them in the [What's
| [Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) | May 2023 |
| [Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys) | May 2023 |
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
+| [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) | June 2023 |
### Deprecation of legacy compliance standards across cloud environments
If you don't have an instance of a DevOps organization onboarded more than once
Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps.
+### Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM
+
+**Estimated date for change: June 2023**
+
+With Agentless Container Posture capabilities available in Defender CSPM, the agent-based discovery capabilities are set to be retired in June 2023. If you currently use container capabilities within Defender CSPM, make sure that the [relevant extensions](concept-agentless-containers.md#onboard-agentless-containers-for-cspm) are enabled before this date to continue receiving the container-related value of the new agentless capabilities, such as container-related attack paths, insights, and inventory.
## Next steps
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
NAT rule collections aren't inherited because they're specific to a given firewa
With inheritance, any changes to the parent policy are automatically applied down to associated firewall child policies.
+## Built-in high availability
+
+High availability is built in, so there's nothing you need to configure.
+
+Azure Firewall Policy is replicated to a paired Azure region. For example, if one Azure region goes down, Azure Firewall policy becomes active in the paired Azure region. The paired region is automatically selected based on the region where the policy is created. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
## Pricing
hdinsight Find Host Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/find-host-name.md
Here are some examples of how to retrieve the FQDN for the nodes in the cluster.
The following example uses [jq](https://stedolan.github.io/jq/) or [ConvertFrom-Json](/powershell/module/microsoft.powershell.utility/convertfrom-json) to parse the JSON response document and display only the host names.

```bash
-export password=''
-export clusterName=''
-curl -u admin:$password -sS -G "https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/hosts" \
+export PASSWORD=''
+export CLUSTER_NAME=''
+curl -u admin:$PASSWORD -sS -G "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/hosts" \
| jq -r '.items[].Hosts.host_name' ```
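To narrow the output to a single node role, the host names can be filtered by prefix; a sketch, assuming the usual HDInsight convention that worker-node host names start with `wn`:

```bash
curl -u admin:$PASSWORD -sS -G "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/hosts" \
| jq -r '.items[].Hosts.host_name' | grep '^wn'
```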
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-cli.md
The steps in this document walk-through creating a HDInsight 4.0 cluster using t
export clusterSizeInNodes=1
export clusterVersion=4.0
export clusterType=hadoop
- export componentVersion=Hadoop=3.1.0
+ export componentVersion=Hadoop=3.1
```

3. [Create the resource group](/cli/azure/group#az-group-create) by entering the command below:
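The command itself is cut off in this digest; creating the group generally looks like the following sketch (the variable names are assumed to match the exports defined above):

```azurecli-interactive
az group create \
  --location $location \
  --name $resourceGroupName
```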
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
Last updated 02/28/2023

# Azure HDInsight release notes

This article provides information about the **most recent** Azure HDInsight release updates. For information on earlier releases, see [HDInsight Release Notes Archive](hdinsight-release-notes-archive.md).
This article provides information about the **most recent** Azure HDInsight rele
## Summary

Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure.
-[Subscribe to our release notes](./subscribe-to-hdi-release-notes-repo.md) and watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
+Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-repo.md) for up-to-date information on HDInsight and all HDInsight versions.
+
+To subscribe, click the "watch" button in the banner and look out for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
## Release date: February 28, 2023
hdinsight Subscribe To Hdi Release Notes Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/subscribe-to-hdi-release-notes-repo.md
Title: Subscribe to GitHub release notes repo
description: Learn how to subscribe to GitHub release notes repo Previously updated : 12/29/2022 Last updated : 05/03/2023

# Subscribe to HDInsight release notes GitHub repo
Learn how to subscribe to HDInsight release notes GitHub repo to get email notif
## Prerequisites
-* You should have a valid GitHub account to subscribe to this Release Notes notification. For more information on GitHub, [see here](https://github.com).
+* You should have a valid GitHub account to subscribe to this HDInsight Release Notes notification. For more information on GitHub, [see here](https://github.com).
**Steps to subscribe to HDInsight release notes GitHub repo**
-1. Go to [GitHub repository](https://github.com/hdinsight/release-notes/releases).
+1. Go to [HDInsight GitHub repository](https://github.com/Azure/HDInsight/releases).
1. Click **watch** and then **Custom**
1. Select **Releases** and click **Apply**
healthcare-apis Deploy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md
Previously updated : 04/28/2023 Last updated : 05/03/2023
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
* **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your resource group).
- * **Device Mapping** - Don't change the default values for this quickstart.
+ * **Device Mapping** - Leave the default values for this quickstart.
- * **Destination Mapping** - Don't change the default values for this quickstart.
+ * **Destination Mapping** - Leave the default values for this quickstart.
- :::image type="content" source="media\deploy-arm-template\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-arm-template\iot-deploy-quickstart-options.png":::
+ :::image type="content" source="media\deploy-arm-template\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the MedTech service." lightbox="media\deploy-arm-template\iot-deploy-quickstart-options.png":::
2. To validate your configuration, select **Review + create**.
When deployment is completed, the following resources and access roles are creat
* Health Data Services Fast Healthcare Interoperability Resources FHIR service.
-* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+* Health Data Services MedTech service with the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) enabled and granted the following access roles:
- * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
+ * For the event hub, the **Azure Event Hubs Data Receiver** access role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
- * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+ * For the FHIR service, the **FHIR Data Writer** access role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
> [!IMPORTANT]
> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A patient resource and a device resource are created for each device that sends data to your FHIR service.
In this quickstart, you learned how to deploy the MedTech service in the Azure p
To learn about other methods for deploying the MedTech service, see

> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+> [Choose a deployment method for the MedTech service](deploy-choose-method.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 04/28/2023 Last updated : 05/03/2023
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
To learn how to get an Azure AD user object ID, see [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id). The user object ID that's used in this tutorial is only an example. If you use this option, use your own user object ID or the object ID of another person who you want to be able to access the FHIR service.
- - **Device mapping**: Don't change the default values for this tutorial. The mappings are set in the template to send a device message to your IoT hub later in the tutorial.
+ - **Device mapping**: Leave the default values for this tutorial.
- - **Destination mapping**: Don't change the default values for this tutorial. The mappings are set in the template to send a device message to your IoT hub later in the tutorial.
+ - **Destination mapping**: Leave the default values for this tutorial.
:::image type="content" source="media\device-messages-through-iot-hub\deploy-template-options.png" alt-text="Screenshot that shows deployment options for the MedTech service for Health Data Services in the Azure portal." lightbox="media\device-messages-through-iot-hub\deploy-template-options.png":::
When deployment is completed, the following resources and access roles are creat
* Health Data Services FHIR service.
-* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+* Health Data Services MedTech service with the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) enabled and granted the following access roles:
- * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
+ * For the event hub, the **Azure Event Hubs Data Receiver** access role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub.
- * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+ * For the FHIR service, the **FHIR Data Writer** access role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
* Conforming and valid MedTech service [device](overview-of-device-mapping.md) and [FHIR destination mappings](overview-of-fhir-destination-mapping.md). **Resolution type** is set to **Create**.
With your resources successfully deployed, you next connect to your IoT hub, cre
* Read the IoT hub-routed test message from the event hub.
* Transform the test message into five FHIR Observations.
-* Persist the FHIR Observations to your FHIR service.
+* Persist the FHIR Observations into your FHIR service.
You complete the steps by using Visual Studio Code with the Azure IoT Hub extension:
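If you'd rather send the test message from a terminal than from Visual Studio Code, a hedged alternative using the Azure CLI `azure-iot` extension is sketched below; the hub name, device ID, and payload are placeholders, not the tutorial's exact message.

```bash
# Requires the azure-iot extension; names and payload are illustrative only.
az extension add --name azure-iot

az iot device send-d2c-message \
  --hub-name my-iot-hub \
  --device-id my-test-device \
  --data '{"heartRate": 78}'
```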
In this tutorial, you deployed an ARM template in the Azure portal, connected to
To learn about other methods for deploying the MedTech service, see

> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+> [Choose a deployment method for the MedTech service](deploy-choose-method.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
lab-services Classroom Labs Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md
The quota applies to a lab for each lab user individually, for the entire durati
A lab can use either quota time, [scheduled time](#schedule), or a combination of both.
+## Advanced networking
+
+With lab plans, advanced networking gives you more control over the virtual network for labs. With advanced networking, you can connect a lab plan to your own virtual network.
+
+Use advanced networking to connect to on-premises resources, such as licensing servers, and to use user-defined routes (UDRs). Some organizations also have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include network traffic control, port management, access to resources in an internal network, and more.
+
+Azure Lab Services advanced networking uses virtual network (VNET) injection to connect a lab plan to your virtual network. VNET injection replaces the [Azure Lab Services virtual network peering](how-to-connect-peer-virtual-network.md) that was used with lab accounts.
+
+Learn more about how to [connect a lab plan to a virtual network](./how-to-connect-vnet-injection.md).
+
## Next steps

- [Create the resources to get started](./quick-create-resources.md)
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
Lab users connect to their lab virtual machine through a load balancer. Lab vir
Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the lab virtual machine. A network security group (NSG) blocks external traffic to any other port.
-If you configured the lab to use [advanced networking](how-to-connect-vnet-injection.md), then each lab uses the subnet that was connected to the lab plan and delegated to Azure Lab Services. In this case, you're responsible for creating a [network security group with an inbound security rule to allow RDP and SSH traffic](how-to-connect-vnet-injection.md#associate-delegated-subnet-with-nsg) to the lab virtual machines.
+If a lab uses [advanced networking](how-to-connect-vnet-injection.md), it uses the subnet that was delegated to Azure Lab Services and connected to the lab plan. You're also responsible for creating an [NSG with an inbound security rule to allow RDP and SSH traffic](how-to-connect-vnet-injection.md#associate-the-subnet-with-the-network-security-group) so lab users can connect to their VMs.
## Access control to the lab virtual machines
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Title: Connect to your virtual network in Azure Lab Services | Microsoft Docs
-description: Learn how to connect a lab to one of your networks.
+ Title: Connect to a virtual network
+
+description: Learn how to connect a lab plan in Azure Lab Services to a virtual network with advanced networking. Advanced networking uses VNET injection.
Previously updated : 07/04/2022 Last updated : 04/25/2023
-# Use advanced networking (virtual network injection) to connect to your virtual network in Azure Lab Services
+# Connect a virtual network to a lab plan with advanced networking in Azure Lab Services
[!INCLUDE [preview focused article](./includes/lab-services-new-update-focused-article.md)]
-This article provides information about connecting a lab plan to your virtual network.
+This article describes how to connect a lab plan to a virtual network in Azure Lab Services. With lab plans, you have more control over the virtual network for labs by using advanced networking. You can connect to on-premises resources, such as licensing servers, and use user-defined routes (UDRs).
-Some organizations have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include a network traffic control, ports management, access to resources in an internal network, etc. Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
+Some organizations have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include network traffic control, port management, access to resources in an internal network, and more. Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
-In the Azure Lab Services [August 2022 Update](lab-services-whats-new.md), customers may take control of the network for the labs using virtual network (VNet) injection. You can now tell Lab Services which virtual network to use, and we'll inject the necessary resources into your network. With VNet injection, you can connect to on premise resources such as licensing servers and use user defined routes (UDRs). VNet injection replaces the [peering to your virtual network](how-to-connect-peer-virtual-network.md), as was done in previous versions.
+Azure Lab Services advanced networking uses virtual network (VNET) injection to connect a lab plan to your virtual network. VNET injection replaces the [Azure Lab Services virtual network peering](how-to-connect-peer-virtual-network.md) that was used with lab accounts.
> [!IMPORTANT]
-> Advanced networking (VNet injection) must be configured when creating a lab plan. It can't be added later.
+> You must configure advanced networking when you create a lab plan. You can't enable advanced networking at a later stage.
> [!NOTE]
-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+> If your organization needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
## Prerequisites
-Before you configure advanced networking for your lab plan, complete the following tasks:
+- An Azure virtual network and subnet. If you don't have these resources, learn how to create a [virtual network](/azure/virtual-network/manage-virtual-network) and [subnet](/azure/virtual-network/virtual-network-manage-subnet).
-1. [Create a virtual network](../virtual-network/quick-create-portal.md). The virtual network must be in the same region as the lab plan.
-1. [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network.
-1. [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**.
-1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md).
-1. [Create an inbound rule to allow traffic from SSH and RDP ports](../virtual-network/manage-network-security-group.md).
-1. [Associate the NSG to the delegated subnet](#associate-delegated-subnet-with-nsg).
+ > [!IMPORTANT]
+ > The virtual network and the lab plan must be in the same Azure region.
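If you still need to create these prerequisite resources, a minimal Azure CLI sketch is shown below; all names and address prefixes are placeholders.

```bash
# Placeholder names and address space; the virtual network must be in the
# same Azure region as the lab plan you create later.
az network vnet create \
  --resource-group my-labs-rg \
  --name my-labs-vnet \
  --location eastus \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name labs-subnet \
  --subnet-prefixes 10.0.1.0/24
```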
-Now that the prerequisites have been completed, you can [use advanced networking to connect your virtual network during lab plan creation](#connect-the-virtual-network-during-lab-plan-creation).
+## Delegate the virtual network subnet to lab plans
-## Delegate the virtual network subnet for use with a lab plan
+To use your virtual network subnet for advanced networking in Azure Lab Services, you need to [delegate the subnet](../virtual-network/subnet-delegation-overview.md) to Azure Lab Services lab plans.
-After you create a subnet for your virtual network, you must [delegate the subnet](../virtual-network/subnet-delegation-overview.md) for use with Azure Lab Services.
+You can delegate a subnet to only one lab plan at a time.
-Only one lab plan at a time can be delegated for use with one subnet.
+Follow these steps to delegate your subnet for use with a lab plan:
-1. Create a [virtual network](../virtual-network/manage-virtual-network.md) and [subnet](../virtual-network/virtual-network-manage-subnet.md).
-2. Open the **Subnets** page for your virtual network.
-3. Select the subnet you wish to delegate to Lab Services and open the property window for that subnet.
-4. For the **Delegate subnet to a service** property, select **Microsoft.LabServices/labplans**. Select **Save**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to your virtual network, and select **Subnets**.
+
+1. Select the subnet you wish to delegate to Azure Lab Services.
+
+1. In **Delegate subnet to a service**, select **Microsoft.LabServices/labplans**, and then select **Save**.
+
+ :::image type="content" source="./media/how-to-connect-vnet-injection/delegate-subnet-for-azure-lab-services.png" alt-text="Screenshot of the subnet properties page in the Azure portal, highlighting the Delegate subnet to a service setting.":::
+
+1. Verify the lab plan service appears in the **Delegated to** column.
+
+ :::image type="content" source="./media/how-to-connect-vnet-injection/delegated-subnet.png" alt-text="Screenshot of list of subnets for a virtual network in the Azure portal, highlighting the Delegated to columns." lightbox="./media/how-to-connect-vnet-injection/delegated-subnet.png":::
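The same delegation can be scripted; a hedged Azure CLI equivalent of the portal steps above, using placeholder names:

```bash
# Delegates the subnet to Azure Lab Services lab plans (placeholder names).
az network vnet subnet update \
  --resource-group my-labs-rg \
  --vnet-name my-labs-vnet \
  --name labs-subnet \
  --delegations Microsoft.LabServices/labplans
```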
+
+## Configure a network security group
+
+An NSG is required when you connect your virtual network to Azure Lab Services. Specifically, configure the NSG to allow:
+
+- inbound RDP/SSH traffic from lab users' computers to the lab virtual machines
+- inbound RDP/SSH traffic to the template virtual machine
+
+After creating the NSG, you associate the NSG with the virtual network subnet.
+
+### Create a network security group to allow traffic
+
+Follow these steps to create an NSG and allow inbound RDP or SSH traffic:
+
+1. If you don't have a network security group already, follow these steps to [create a network security group (NSG)](/azure/virtual-network/manage-network-security-group).
+
+ Make sure to create the network security group in the same Azure region as the virtual network and lab plan.
+
+1. Create an inbound security rule to allow RDP and SSH traffic.
+
+ 1. Go to your network security group in the [Azure portal](https://portal.azure.com).
- :::image type="content" source="./media/how-to-connect-vnet-injection/delegate-subnet-for-azure-lab-services.png" alt-text="Screenshot of properties windows for subnet. The Delegate subnet to a service property is highlighted and set to Microsoft dot Lab Services forward slash lab plans.":::
-5. Verify the lab plan service appears in the **Delegated to** column.
+ 1. Select **Inbound security rules**, and then select **+ Add**.
- :::image type="content" source="./media/how-to-connect-vnet-injection/delegated-subnet.png" alt-text="Screenshot of list of subnets for a virtual network. The Delegated to and Security group columns are highlighted." lightbox="./media/how-to-connect-vnet-injection/delegated-subnet.png":::
+ 1. Enter the details for the new inbound security rule:
-## Associate delegated subnet with NSG
+ | Setting | Value |
+ | -- | -- |
+ | **Source** | Select *Any*. |
+ | **Source port ranges** | Enter *\**. |
+ | **Destination** | Select *IP Addresses*. |
+ | **Destination IP addresses/CIDR ranges** | Select the range of your virtual network subnet. |
+ | **Service** | Select *Custom*. |
+ | **Destination port ranges** | Enter *22, 3389*. Port 22 is for Secure Shell protocol (SSH). Port 3389 is for Remote Desktop Protocol (RDP). |
+ | **Protocol** | Select *Any*. |
+ | **Action** | Select *Allow*. |
+ | **Priority** | Enter *1000*. The rule must take precedence over any *Deny* rules for RDP or SSH; in NSG rules, a lower priority number is evaluated first. |
+ | **Name** | Enter *AllowRdpSshForLabs*. |
+
+ :::image type="content" source="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" lightbox="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for network security group in the Azure portal.":::
-> [!WARNING]
-> An NSG with inbound rules for RDP and/or SSH is required to allow access to the template and lab VMs.
+ 1. Select **Add** to add the inbound security rule to the NSG.
-For connectivity to lab VMs, it's required to associate an NSG with the subnet delegated to Lab Services. We'll create an NSG, add an inbound rule to allow both SSH and RDP traffic, and then associate the NSG with the delegated subnet.
+ 1. Select **Refresh**. The new rule should show in the list of rules.
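As a scripted alternative to the portal steps above, a minimal Azure CLI sketch that creates the NSG and the inbound rule with the values from the table; the resource names and subnet range are placeholders.

```bash
# Placeholder names; the destination prefix is the lab subnet's address range.
az network nsg create \
  --resource-group my-labs-rg \
  --name labs-nsg \
  --location eastus

az network nsg rule create \
  --resource-group my-labs-rg \
  --nsg-name labs-nsg \
  --name AllowRdpSshForLabs \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 22 3389
```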
-1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md), if not done already.
-2. Create an inbound security rule allowing RDP and SSH traffic.
- 1. Select **Inbound security rules** on the left menu.
- 2. Select **+ Add** from the top menu bar. Fill in the details for adding the inbound security rule as follows:
- 1. For **Source**, select **Any**.
- 2. For **Source port ranges**, select **\***.
- 3. For **Destination**, select **IP Addresses**.
- 4. For **Destination IP addresses/CIDR ranges**, select subnet range previously created subnet.
- 5. For **Service**, select **Custom**.
- 6. For **Destination port ranges**, enter **22, 3389**. Port 22 is for Secure Shell protocol (SSH). Port 3389 is for Remote Desktop Protocol (RDP).
- 7. For **Protocol**, select **Any**.
- 8. For **Action**, select **Allow**.
- 9. For **Priority**, select **1000**. Priority must be higher than other **Deny** rules for RDP and/or SSH.
- 10. For **Name**, enter **AllowRdpSshForLabs**.
- 11. Select **Add**.
+### Associate the subnet with the network security group
- :::image type="content" source="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" lightbox="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for Network security group.":::
- 3. Wait for the rule to be created.
- 4. Select **Refresh** on the menu bar. Our new rule will now show in the list of rules.
-3. Associate the NSG with the delegated subnet.
- 1. Select **Subnets** on the left menu.
- 1. Select **+ Associate** from the top menu bar.
- 1. On the **Associate subnet** page, do the following actions:
- 1. For **Virtual network**, select previously created virtual network.
- 2. For **Subnet**, select previously created subnet.
- 3. Select **OK**.
+To apply the network security group rules to traffic in the virtual network subnet, associate the NSG with the subnet.
- :::image type="content" source="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" lightbox="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" alt-text="Screenshot of Associate subnet page in the Azure portal.":::
+1. Go to your network security group, and select **Subnets**.
+
+1. Select **+ Associate** from the top menu bar.
+
+1. For **Virtual network**, select your virtual network.
+
+1. For **Subnet**, select your virtual network subnet.
+
+ :::image type="content" source="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" lightbox="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" alt-text="Screenshot of the Associate subnet page in the Azure portal.":::
+
+1. Select **OK** to associate the virtual network subnet with the network security group.
+
+Lab users and lab managers can now connect to their lab virtual machines or lab template by using RDP or SSH.
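The association can also be scripted; a hedged CLI equivalent of the steps above, with placeholder names:

```bash
# Associates the NSG with the delegated subnet (placeholder names).
az network vnet subnet update \
  --resource-group my-labs-rg \
  --vnet-name my-labs-vnet \
  --name labs-subnet \
  --network-security-group labs-nsg
```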
## Connect the virtual network during lab plan creation
+You can now create the lab plan and connect it to the virtual network. As a result, the template VM and lab VMs are injected into your virtual network.
+
+> [!IMPORTANT]
+> You must configure advanced networking when you create a lab plan. You can't enable advanced networking at a later stage.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
1. Search for **lab plan**. (**Lab plan** can also be found under the **DevOps** category.)
-1. Enter required information on the **Basics** tab of the **Create a lab plan** page. For more information, see [Create a lab plan with Azure Lab Services](quick-create-resources.md).
-1. From the **Basics** tab of the **Create a lab plan** page, select **Next: Networking** at the bottom of the page.
-1. Select **Enable advanced networking**.
- 1. For **Virtual network**, select an existing virtual network for the lab network. For a virtual network to appear in this list, it must be in the same region as the lab plan.
- 2. Specify an existing **subnet** for VMs in the lab. For subnet requirements, see [Delegate the virtual network subnet for use with a lab plan](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan).
+1. Enter the information on the **Basics** tab of the **Create a lab plan** page.
+
+ For more information, see [Create a lab plan with Azure Lab Services](quick-create-resources.md).
+
+1. Select the **Networking** tab, and then select **Enable advanced networking**.
- :::image type="content" source="./media/how-to-connect-vnet-injection/create-lab-plan-advanced-networking.png" alt-text="Screenshot of the Networking tab of the Create a lab plan wizard.":::
+1. For **Virtual network**, select your virtual network. For **Subnet**, select your virtual network subnet.
-Once you have a lab plan configured with advanced networking, all labs created with this lab plan use the specified subnet.
+ If your virtual network doesn't appear in the list, verify that the lab plan is in the same Azure region as the virtual network.
+
+ :::image type="content" source="./media/how-to-connect-vnet-injection/create-lab-plan-advanced-networking.png" alt-text="Screenshot of the Networking tab of the Create a lab plan wizard.":::
+
+1. Select **Review + Create** to create the lab plan with advanced networking.
+
+All labs you create for this lab plan can now use the specified subnet.
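To sanity-check the networking configuration the labs depend on, one option is to query the subnet with the CLI; the names are placeholders.

```bash
# Shows the subnet's delegation and NSG association (placeholder names).
az network vnet subnet show \
  --resource-group my-labs-rg \
  --vnet-name my-labs-vnet \
  --name labs-subnet \
  --query "{delegatedTo: delegations[].serviceName, nsg: networkSecurityGroup.id}"
```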
## Known issues

-- Deleting your virtual network or subnet will cause the lab to stop working
-- Changing the DNS label on the public IP will cause the **Connect** button for lab VMs to stop working.
+- Deleting your virtual network or subnet causes the lab to stop working.
+- Changing the DNS label on the public IP causes the **Connect** button for lab VMs to stop working.
- Azure Firewall isn't currently supported.

## Next steps
-See the following articles:
- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md).
- As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).
-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
description: Overview of gateway load balancer SKU for Azure Load Balancer.
Previously updated : 12/28/2021 Last updated : 04/20/2023

# Gateway Load Balancer
Gateway Load Balancer consists of the following components:
* **Tunnel interfaces** - Gateway Load Balancer backend pools have another component called tunnel interfaces. The tunnel interface enables the appliances in the backend to ensure network flows are handled as expected. Each backend pool can have up to two tunnel interfaces. Tunnel interfaces can be either internal or external. For traffic coming to your backend pool, you should use the external type. For traffic going from your appliance to the application, you should use the internal type.
-* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain. In order to chain a Load Balancer frontend or Public IP configuration to a Gateway Load Balancer that is cross-subscription, users will need permission for the resource provider operation "Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action". For cross-tenant chaining, the user will also need Guest access.
+* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain. A cross-tenant chain involves chaining a Load Balancer frontend or Public IP configuration to a Gateway Load Balancer that is in another subscription or tenant. For cross-tenant chaining, users need the following (a CLI sketch follows this list):
+ * Permission for the resource provider operation `Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action`.
+ * Guest access to the subscription of the Gateway Load Balancer.
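For illustration, a hedged CLI sketch that chains an existing Standard Public Load Balancer frontend to a Gateway Load Balancer; all names and the frontend IP configuration resource ID are placeholders.

```bash
# <gateway-frontend-id> is the ARM ID of the Gateway Load Balancer's frontend
# IP configuration; the other names are placeholders.
az network lb frontend-ip update \
  --resource-group my-app-rg \
  --lb-name my-public-lb \
  --name myFrontend \
  --gateway-lb "<gateway-frontend-id>"
```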
## Pricing
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/instance-metadata-service-load-balancer.md
Previously updated : 02/12/2021 Last updated : 04/20/2023

# Retrieve load balancer information by using Azure Instance Metadata Service
-IMDS (Azure Instance Metadata Service) provides information about currently running virtual machine instances. The service is a REST API that's available at a well-known, non-routable IP address (169.254.169.254).
+IMDS (Azure Instance Metadata Service) provides information about currently running virtual machine instances. The service is a REST API that's available at a well-known, nonroutable IP address (169.254.169.254).
When you place virtual machine or virtual machine set instances behind an Azure Standard Load Balancer, you can use IMDS to retrieve metadata related to the load balancer and the instances.
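For example, from inside a VM behind a Standard Load Balancer, you can query the load balancer metadata with a plain HTTP call; IMDS is reachable only from within the VM.

```bash
# Returns load balancer metadata for the current VM instance.
curl -H "Metadata:true" --noproxy "*" \
  "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01"
```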
load-balancer Load Balancer Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-common-deployment-errors.md
tags: top-support-issue
Previously updated : 11/22/2021 Last updated : 04/20/2023

# Troubleshoot common Azure deployment errors with Azure Load Balancer
This article describes some common Azure Load Balancer deployment errors and pro
| Error code | Details and mitigation |
| - | - |
|DifferentSkuLoadBalancersAndPublicIPAddressNotAllowed| Both Public IP SKU and Load Balancer SKU must match. Ensure Azure Load Balancer and Public IP SKUs match. Standard SKU is recommended for production workloads. Learn more about the [differences in SKUs](./skus.md) |
-|DifferentSkuLoadBalancerAndPublicIPAddressNotAllowedInVMSS | Virtual machine scale sets default to Basic Load Balancers when SKU is unspecified or deployed without Standard Public IPs. Re-deploy virtual machine scale set with Standard Public IPs on the individual instances to ensure Standard Load Balancer is selected or simply select a Standard LB when deploying virtual machine scale set from the Azure portal. |
-|MaxAvailabilitySetsInLoadBalancerReached | The backend pool of a Load Balancer can contain a maximum of 150 availability sets. If you don't have availability sets explicitly defined for your VMs in the backend pool, each single VM goes into its own Availability Set. So deploying 150 standalone VMs would imply that it would have 150 Availability sets, thus hitting the limit. You can deploy an availability set and add additional VMs to it as a workaround. |
+|DifferentSkuLoadBalancerAndPublicIPAddressNotAllowedInVMSS | Virtual Machine Scale Sets default to Basic Load Balancers when SKU is unspecified or deployed without Standard Public IPs. Redeploy Virtual Machine Scale Set with Standard Public IPs on the individual instances to ensure Standard Load Balancer is selected or select a Standard LB when deploying Virtual Machine Scale Set from the Azure portal. |
+|MaxAvailabilitySetsInLoadBalancerReached | The backend pool of a Load Balancer can contain a maximum of 150 availability sets. If you don't have availability sets explicitly defined for your VMs in the backend pool, each single VM goes into its own Availability Set. So deploying 150 standalone VMs would imply that it would have 150 Availability sets, thus hitting the limit. You can deploy an availability set and add more VMs to it as a workaround. |
|NetworkInterfaceAndLoadBalancerAreInDifferentAvailabilitySets | For Basic Sku load balancer, network interface and load balancer have to be in the same availability set. |
-|RulesOfSameLoadBalancerTypeUseSameBackendPortProtocolAndIPConfig| You cannot have more than one rule on a given load balancer type (internal, public) with same backend port and protocol referenced by same virtual machine scale set. Update your rule to change this duplicate rule creation. |
-|RulesOfSameLoadBalancerTypeUseSameBackendPortProtocolAndVmssIPConfig| You cannot have more than one rule on a given load balancer type (internal, public) with same backend port and protocol referenced by same virtual machine scale set. Update your rule parameters to change this duplicate rule creation. |
-|AnotherInternalLoadBalancerExists| You can have only one Load Balancer of type internal reference the same set of VMs/network interfaces in the backend of the Load Balancer. Update your deployment to ensure you are creating only one Load Balancer of the same type. |
-|CannotUseInactiveHealthProbe| You cannot have a probe that's not used by any rule configured for virtual machine scale set health. Ensure that the probe that is set up is being actively used. |
-|VMScaleSetCannotUseMultipleLoadBalancersOfSameType| You cannot have multiple Load Balancers of the same type (internal, public). You can have a maximum of one internal and one public Load Balancer. |
-|VMScaleSetCannotReferenceLoadbalancerWhenLargeScaleOrCrossAZ | Basic Load Balancer is not supported for multiple-placement group virtual machine scale sets or cross-availability zone virtual machine scale set. Use Standard Load Balancer instead. |
+|RulesOfSameLoadBalancerTypeUseSameBackendPortProtocolAndIPConfig| You can't have more than one rule on a given load balancer type (internal, public) with same backend port and protocol referenced by same Virtual Machine Scale Set. Update your rule to change this duplicate rule creation. |
+|RulesOfSameLoadBalancerTypeUseSameBackendPortProtocolAndVmssIPConfig| You can't have more than one rule on a given load balancer type (internal, public) with same backend port and protocol referenced by same Virtual Machine Scale Set. Update your rule parameters to change this duplicate rule creation. |
+|AnotherInternalLoadBalancerExists| You can have only one Load Balancer of type internal reference the same set of VMs/network interfaces in the backend of the Load Balancer. Update your deployment to ensure you're creating only one Load Balancer of the same type. |
+|CannotUseInactiveHealthProbe| You can't have a probe that's not used by any rule configured for Virtual Machine Scale Set health. Ensure that the probe that is set up is being actively used. |
+|VMScaleSetCannotUseMultipleLoadBalancersOfSameType| You can't have multiple Load Balancers of the same type (internal, public). You can have a maximum of one internal and one public Load Balancer. |
+|VMScaleSetCannotReferenceLoadbalancerWhenLargeScaleOrCrossAZ | Basic Load Balancer isn't supported for multiple-placement group Virtual Machine Scale Sets or cross-availability zone Virtual Machine Scale Set. Use Standard Load Balancer instead. |
|MarketplacePurchaseEligibilityFailed | Switch to the correct Administrative account to enable purchases due to subscription being an EA Subscription. You can read more [here](../marketplace/marketplace-faq-publisher-guide.yml#what-could-block-a-customer-from-completing-a-purchase-). |
|ResourceDeploymentFailure| If your load balancer is in a failed state, follow these steps to bring it back from the failed state (see the CLI check after this table):<ol><li>Go to https://resources.azure.com, and sign in with your Azure portal credentials.</li><li>Select **Read/Write**.</li><li>On the left, expand **Subscriptions**, and then expand the Subscription with the Load Balancer to update.</li><li>Expand **ResourceGroups**, and then expand the resource group with the Load Balancer to update.</li><li>Select **Microsoft.Network** > **LoadBalancers**, and then select the Load Balancer to update, **LoadBalancer_1**.</li><li>On the display page for **LoadBalancer_1**, select **GET** > **Edit**.</li><li>Update the **ProvisioningState** value from **Failed** to **Succeeded**.</li><li>Select **PUT**.</li></ol>|
-|LoadBalancerWithoutFrontendIPCantHaveChildResources | A Load Balancer resource that has no frontend IP configurations, cannot have associated child resources or components associated to it. In order to mitigate this error, add a frontend IP configuration and then add the resources you are trying to add. |
-| LoadBalancerRuleCountLimitReachedForNic | A backend pool member's network interface (virtual machine, virtual machine scale set) cannot be associated to more than 300 rules. Reduce the number of rules or leverage another Load Balancer. This limit is documented on the [Load Balancer limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer).
-| LoadBalancerInUseByVirtualMachineScaleSet | The Load Balancer resource is in use by a virtual machine scale set and cannot be deleted. Use the ARM ID provided in the error message to search for the virtual machine scale set in order to delete it. |
+|LoadBalancerWithoutFrontendIPCantHaveChildResources | A Load Balancer resource that has no frontend IP configurations, can't have associated child resources or components associated to it. In order to mitigate this error, add a frontend IP configuration and then add the resources you're trying to add. |
+| LoadBalancerRuleCountLimitReachedForNic | A backend pool member's network interface (virtual machine, Virtual Machine Scale Set) can't be associated to more than 300 rules. Reduce the number of rules or use another Load Balancer. This limit is documented on the [Load Balancer limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer).
+| LoadBalancerInUseByVirtualMachineScaleSet | The Load Balancer resource is in use by a Virtual Machine Scale Set and can't be deleted. Use the Azure Resource Manager ID provided in the error message to search for the Virtual Machine Scale Set in order to delete it. |
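As a quick first check for the failed-state scenario in the table above, you can read the provisioning state from the CLI before editing the resource; the names are placeholders.

```bash
# A result of "Failed" indicates the recovery steps above apply (placeholder names).
az network lb show \
  --resource-group my-rg \
  --name LoadBalancer_1 \
  --query provisioningState
```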
## Next steps
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
description: Learn about the different types of health probes and configuration
Previously updated : 02/10/2022 Last updated : 05/04/2023

# Azure Load Balancer health probes
-Azure Load Balancer rules require a health probe to detect the endpoint status. The configuration of the health probe and probe responses determines which backend pool instances will receive new connections. Use health probes to detect the failure of an application. Generate a custom response to a health probe. Use the health probe for flow control to manage load or planned downtime. When a health probe fails, the load balancer will stop sending new connections to the respective unhealthy instance. Outbound connectivity isn't affected, only inbound.
+Azure Load Balancer rules require a health probe to detect the endpoint status. The configuration of the health probe and probe responses determines which backend pool instances receive new connections. Use health probes to detect the failure of an application. Generate a custom response to a health probe. Use the health probe for flow control to manage load or planned downtime. When a health probe fails, the load balancer stops sending new connections to the respective unhealthy instance. Outbound connectivity isn't affected, only inbound.
Health probes support multiple protocols. The availability of a specific health probe protocol varies by Load Balancer SKU. Additionally, the behavior of the service varies by Load Balancer SKU as shown in this table:
Health probe configuration consists of the following elements:
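For reference, a minimal CLI sketch that sets these probe elements on an existing load balancer; the resource names, port, and path are placeholders, not values from the article.

```bash
# Creates an HTTP health probe with a 5-second interval (placeholder names).
az network lb probe create \
  --resource-group my-rg \
  --lb-name my-lb \
  --name my-health-probe \
  --protocol Http \
  --port 80 \
  --path / \
  --interval 5 \
  --threshold 2
```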
## Application signal, detection of the signal, and Load Balancer reaction
-The interval value determines how frequently the health probe will probe for a response from your backend pool instances. If the health probe fails, it will immediately mark your backend pool instances as unhealthy. On the next healthy probe up, the health probe will immediately mark your backend pool instances as healthy.
+The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. On the next healthy probe up, the health probe marks your backend pool instances as healthy.
For example, consider a health probe with a five-second interval. The time at which a probe is sent isn't synchronized with when your application may change state. The total time it takes for your health probe to reflect your application state can fall into one of the two following scenarios:
For example, a health probe set to five seconds. The time at which a probe is se
2. If your application produces a time-out response just after the next probe arrives, the detection of the events won't begin until the probe arrives and times out, plus another 5 seconds. You can assume detection takes just under 10 seconds.
-For this example, once detection has occurred, the platform will take a small amount of time to react to the change.
+For this example, once detection has occurred, the platform takes a small amount of time to react to the change.
The reaction depends on:
The reaction depends on:
* When the next health probe is sent
* When the detection has been communicated across the platform
-Assume the reaction to a time-out response will take a minimum of 5 seconds and a maximum of 10 seconds to react to the change.
+Assume the reaction to a time-out response takes a minimum of 5 seconds and a maximum of 10 seconds to react to the change.
This example is provided to illustrate what is taking place. It's not possible to forecast an exact duration beyond the guidance in the example.
Any backend endpoint that has achieved a healthy state is eligible for receiving
### TCP connections
-New TCP connections will succeed to remaining healthy backend endpoint.
+New TCP connections succeed to the remaining healthy backend endpoints.
-If a backend endpoint's health probe fails, established TCP connections to this backend endpoint continue. However, if a backend pool only contains a single endpoint, then existing flows will terminate.
+If a backend endpoint's health probe fails, established TCP connections to this backend endpoint continue. However, if a backend pool only contains a single endpoint, then existing flows terminate.
-If all probes for all instances in a backend pool fail, no new flows will be sent to the backend pool. Standard Load Balancer will permit established TCP flows to continue given that a backend pool has more than one backend endpoint. Basic Load Balancer will terminate all existing TCP flows to the backend pool.
+If all probes for all instances in a backend pool fail, no new flows are sent to the backend pool. Standard Load Balancer allows established TCP flows to continue given that a backend pool has more than one backend endpoint. Basic Load Balancer terminates all existing TCP flows to the backend pool.
-Load Balancer is a pass through service. Load Balancer doesn't terminate TCP connections. The flow is always between the client and the VM's guest OS and application. A pool with all probes down results in a frontend that won't respond to TCP connection open attempts. There isn't a healthy backend endpoint to receive the flow and respond with an acknowledgment.
+Load Balancer is a pass through service. Load Balancer doesn't terminate TCP connections. The flow is always between the client and the VM's guest OS and application. If a pool has all probes down, the frontend doesn't respond to TCP connection open attempts as there's no healthy backend endpoint to receive the flow and respond with an acknowledgment.
### UDP datagrams
-UDP datagrams will be delivered to healthy backend endpoints.
+UDP datagrams are delivered to healthy backend endpoints.
-UDP is connection-less and there's no flow state tracked for UDP. If any backend endpoint's health probe fails, existing UDP flows will move to another healthy instance in the backend pool.
+UDP is connection-less and there's no flow state tracked for UDP. If any backend endpoint's health probe fails, existing UDP flows move to another healthy instance in the backend pool.
-If all probes for all instances in a backend pool fail, existing UDP flows will terminate for basic and standard load balancers.
+If all probes for all instances in a backend pool fail, existing UDP flows terminate for basic and standard load balancers.
## Probe source IP address
-Load Balancer uses a distributed probing service for its internal health model. The probing service resides on each host where VMs and can be programmed on-demand to generate health probes per the customer's configuration. The health probe traffic is directly between the probing service that generates the health probe and the customer VM. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. (Note that IPv6 probes use a [link-local address](https://www.wikipedia.org/wiki/Link-local_address) as their source.)
+Load Balancer uses a distributed probing service for its internal health model. The probing service resides on each host where VMs run, and can be programmed on demand to generate health probes per the customer's configuration. The health probe traffic is directly between the probing service that generates the health probe and the customer VM. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. IPv6 probes use a [link-local address](https://www.wikipedia.org/wiki/Link-local_address) as their source.
The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default.
In addition to load balancer health probes, the [following operations use this I
* When you design the health model for your application, probe a port on a backend endpoint that reflects the health of the instance **and** the application service. The application port and the probe port aren't required to be the same. In some scenarios, it may be desirable for the probe port to be different than the port your application uses.
-* It can be useful for your application to generate a health probe response, and signal the load balancer whether your instance should receive new connections. You can manipulate the probe response to throttle delivery of new connections to an instance by failing the health probe. You can prepare for maintenance of your application and initiate draining of connections to your application. A [probe down](#probe-down-behavior) signal will always allow TCP flows to continue until idle timeout or connection closure in a Standard Load Balancer.
+* It can be useful for your application to generate a health probe response, and signal the load balancer whether your instance should receive new connections. You can manipulate the probe response to throttle delivery of new connections to an instance by failing the health probe. You can prepare for maintenance of your application and initiate draining of connections to your application. A [probe down](#probe-down-behavior) signal always allows TCP flows to continue until idle timeout or connection closure in a Standard Load Balancer.
* For a UDP load-balanced application, generate a custom health probe signal from the backend endpoint. Use either TCP, HTTP, or HTTPS for the health probe that matches the corresponding listener.
* [HA Ports load-balancing rule](load-balancer-ha-ports-overview.md) with [Standard Load Balancer](./load-balancer-overview.md). All ports are load balanced and a single health probe response must reflect the status of the entire instance.
-* Don't translate or proxy a health probe through the instance that receives the health probe to another instance in your virtual network. This configuration can lead to cascading failures in your scenario. For example: A set of third-party appliances is deployed in the backend pool of a load balancer to provide scale and redundancy for the appliances. The health probe is configured to probe a port that the third-party appliance proxies or translates to other virtual machines behind the appliance. If you probe the same port used to translate or proxy requests to the other virtual machines behind the appliance, any probe response from a single virtual machine will mark down the appliance. This configuration can lead to a cascading failure of the application. The trigger can be an intermittent probe failure that will cause the load balancer to mark down the appliance instance. This action can disable your application. Probe the health of the appliance itself. The selection of the probe to determine the health signal is an important consideration for network virtual appliances (NVA) scenarios. Consult your application vendor for the appropriate health signal is for such scenarios.
+* Don't translate or proxy a health probe through the instance that receives the health probe to another instance in your virtual network. This configuration can lead to cascading failures in your scenario. For example: A set of third-party appliances is deployed in the backend pool of a load balancer to provide scale and redundancy for the appliances. The health probe is configured to probe a port that the third-party appliance proxies or translates to other virtual machines behind the appliance. If you probe the same port used to translate or proxy requests to the other virtual machines behind the appliance, any probe response from a single virtual machine marks down the appliance. This configuration can lead to a cascading failure of the application. The trigger can be an intermittent probe failure that causes the load balancer to mark down the appliance instance. This action can disable your application. Probe the health of the appliance itself. The selection of the probe to determine the health signal is an important consideration for network virtual appliance (NVA) scenarios. Consult your application vendor for the appropriate health signal for such scenarios.
-* If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe will fail as it is unable to reach your instance. In turn, Load Balancer will mark down your instance due to the health probe failure. This misconfiguration can cause your load balanced application scenario to fail.
+* If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe fails as it is unable to reach your instance. In turn, Load Balancer marks down your instance due to the health probe failure. This misconfiguration can cause your load balanced application scenario to fail.
* For Load Balancer's health probe to mark up your instance, you **must** allow this IP address in any Azure [network security groups](../virtual-network/network-security-groups-overview.md) and local firewall policies. By default, every network security group includes the [service tag](../virtual-network/network-security-groups-overview.md#service-tags) AzureLoadBalancer to permit health probe traffic.
* To test a health probe failure or mark down an individual instance, use a [network security group](../virtual-network/network-security-groups-overview.md) to explicitly block the health probe. Create an NSG rule to block the destination port or [source IP](#probe-source-ip-address) to simulate the failure of a probe (see the sketch after this list).
-* Don't configure your virtual network with the Microsoft owned IP address range that contains 168.63.129.16. The configuration will collide with the IP address of the health probe and can cause your scenario to fail.
+* Don't configure your virtual network with the Microsoft owned IP address range that contains 168.63.129.16. The configuration collides with the IP address of the health probe and can cause your scenario to fail.
* If you have multiple interfaces configured in your virtual machine, ensure you respond to the probe on the interface you received it on. You may need to source-NAT this address in the VM on a per-interface basis.
-* Don't enable [TCP timestamps](https://tools.ietf.org/html/rfc1323). TCP timestamps can cause health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. The dropped packets can cause the load balancer to mark the endpoint as down. TCP timestamps are routinely enabled by default on security hardened VM images and must be disabled.
+* Don't enable [TCP timestamps](https://tools.ietf.org/html/rfc1323). TCP timestamps can cause health probes to fail due to the VM's guest OS TCP stack dropping TCP packets. The dropped packets can cause the load balancer to mark the endpoint as down. TCP timestamps are routinely enabled by default on security hardened VM images and must be disabled.
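As referenced in the testing tip above, a minimal sketch of such a deny rule that blocks the probe source IP to simulate a failure; the NSG and resource group names are placeholders.

```bash
# Blocks the Azure health probe source IP to simulate a probe failure
# (placeholder names). Delete the rule to restore probing.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-backend-nsg \
  --name DenyLoadBalancerProbe \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes 168.63.129.16 \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges '*'
```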
## Monitoring
-Public and internal [Standard Load Balancer](./load-balancer-overview.md) expose per endpoint and backend endpoint health probe status through [Azure Monitor](./monitor-load-balancer.md). These metrics can be consumed by other Azure services or partner applications.
+Public and internal [Standard Load Balancer](./load-balancer-overview.md) expose per endpoint and backend endpoint health probe status through [Azure Monitor](./monitor-load-balancer.md). Other Azure services or partner applications can consume these metrics.
-Azure Monitor logs aren't available for both public and internal Basic Load Balancers.
+Azure Monitor logs aren't available for use with Basic Load Balancer.
## Limitations

* HTTPS probes don't support mutual authentication with a client certificate.
-* You should assume health probes will fail when TCP timestamps are enabled.
+* You should assume health probes fail when TCP timestamps are enabled.
* A Basic SKU load balancer health probe isn't supported with a virtual machine scale set.
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
Previously updated : 04/14/2022 Last updated : 05/03/2023

# High availability ports overview
The following diagram presents a hub-and-spoke virtual network deployment. The s
You can also use HA ports for applications that require load balancing of large numbers of ports. You can simplify these scenarios by using an internal [standard load balancer](./load-balancer-overview.md) with HA ports. A single load-balancing rule replaces multiple individual load-balancing rules, one for each port.
-## Region availability
-
-The HA ports feature is available in all the global Azure regions.
-
## Supported configurations
-### A single, non-floating IP (non-Direct Server Return) HA-ports configuration on an internal standard load balancer
+### A single, nonfloating IP (non-Direct Server Return) HA-ports configuration on an internal standard load balancer
This configuration is a basic HA ports configuration. Use the following steps to configure an HA ports load-balancing rule on a single frontend IP address:
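For orientation, a minimal CLI sketch of the resulting rule is shown below; protocol **All** with frontend and backend port **0** is what designates an HA ports rule, and all resource names are placeholders.

```bash
# HA ports rule: protocol All, frontend and backend port 0 (placeholder names).
az network lb rule create \
  --resource-group my-rg \
  --lb-name my-internal-lb \
  --name myHAPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name myFrontend \
  --backend-pool-name myBackendPool \
  --probe-name my-health-probe
```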
However, you can configure a public Standard Load Balancer for the back-end inst
You can similarly configure your load balancer to use a load-balancing rule with **HA Port** with a single front end by setting the **Floating IP** to **Enabled**.
-With this configuration, you can add more floating IP load-balancing rules and/or a public load balancer. However, you can't use a non-floating IP, HA-ports load-balancing configuration on top of this configuration.
+With this configuration, you can add more floating IP load-balancing rules and/or a public load balancer. However, you can't use a nonfloating IP, HA-ports load-balancing configuration on top of this configuration.
### Multiple HA-ports configurations on an internal standard load balancer
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
Previously updated : 05/07/2020 Last updated : 05/03/2023

# Load Balancer and Availability Zones

Azure Load Balancer supports availability zone scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distributing them across, zones. Review this document to understand these concepts and fundamental scenario design guidance.
-A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. To configure the zone-related properties (mentioned above) for your load balancer, select the appropriate type of frontend needed.
+A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. To configure the zone-related properties for your load balancer, select the appropriate type of frontend needed.
## Zone redundant
-In a region with Availability Zones, a Standard Load Balancer can be zone-redundant. This traffic is served by a single IP address.
-
-A single frontend IP address will survive zone failure. The frontend IP may be used to reach all (non-impacted) backend pool members no matter the zone. One or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
+In a region with Availability Zones, a Standard Load Balancer can be zone-redundant with traffic served by a single IP address. A single frontend IP address survives zone failure. The frontend IP may be used to reach all (nonimpacted) backend pool members no matter the zone. One or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone failure.
The frontend's IP address is served simultaneously by multiple independent infra
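For illustration, hedged CLI sketches of the two frontend IP styles (the zonal variant is described in the next section); all names are placeholders, and `--zone 1 2 3` yields a zone-redundant IP while a single value yields a zonal IP.

```bash
# Zone-redundant public IP, served from all three zones (placeholder names).
az network public-ip create \
  --resource-group my-rg \
  --name myZoneRedundantIP \
  --sku Standard \
  --zone 1 2 3

# Zonal public IP, guaranteed to zone 1 only.
az network public-ip create \
  --resource-group my-rg \
  --name myZonalIP \
  --sku Standard \
  --zone 1
```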
## Zonal
-You can choose to have a frontend guaranteed to a single zone, which is known as a *zonal*. This scenario means any inbound or outbound flow is served by a single zone in a region. Your frontend shares fate with the health of the zone. The data path is unaffected by failures in zones other than where it was guaranteed. You can use zonal frontends to expose an IP address per Availability Zone.
+You can choose to have a frontend guaranteed to a single zone, which is known as a *zonal* frontend. With this scenario, a single zone in a region serves all inbound or outbound flow. Your frontend shares fate with the health of the zone. The data path is unaffected by failures in zones other than where it was guaranteed. You can use zonal frontends to expose an IP address per Availability Zone.
Additionally, the use of zonal frontends directly for load-balanced endpoints within each zone is supported. You can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For public endpoints, you can integrate them with a DNS load-balancing product like [Traffic Manager](../traffic-manager/traffic-manager-overview.md) and use a single DNS name.
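By contrast, a zonal frontend pins the IP address to a single zone. The sketch below (placeholder names) creates one zonal Standard public IP per zone, which you could then register with a DNS load-balancing product as described above:

```azurecli
# one zonal Standard public IP per availability zone
for zone in 1 2 3; do
  az network public-ip create \
    --resource-group myResourceGroup \
    --name "myZonalIP-$zone" \
    --sku Standard \
    --zone $zone
done
```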
Using multiple frontends allows you to load balance traffic on more than one port
### Transition between regional zonal models
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs would remain non-zonal like IPs used for load balancer frontends. To ensure your architecture can take advantage of the new zones, creation of new frontend IPs is recommended. Once created, you can replace the existing non-zonal frontend with a new zone-redundant frontend using the method described [here](../virtual-network/ip-services/configure-public-ip-load-balancer.md#change-or-remove-public-ip-address). All existing load balancing and NAT rules will transition to the new frontend.
+When a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs, such as those used for load balancer frontends, remain non-zonal. To ensure your architecture can take advantage of the new zones, we recommend that you create new frontend IPs. Once created, you can replace the existing non-zonal frontend with a new zone-redundant frontend using the method described [here](../virtual-network/ip-services/configure-public-ip-load-balancer.md#change-or-remove-public-ip-address). All existing load balancing and NAT rules transition to the new frontend.
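A hedged sketch of that swap, with placeholder resource names; the linked article remains the authoritative procedure:

```azurecli
# create a new zone-redundant Standard public IP
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZoneRedundantIP \
  --sku Standard \
  --zone 1 2 3

# repoint the existing frontend at the new IP; rules move with the frontend
az network lb frontend-ip update \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFrontEnd \
  --public-ip-address myZoneRedundantIP
```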
### Control vs data plane implications
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
description: Use the available metrics, alerts, and resource health information
Previously updated : 01/26/2022 Last updated : 05/03/2023 -+ # Standard load balancer diagnostics with metrics, alerts, and resource health
To get the data path availability for your standard load balancer resources:
1. Make sure the correct load balancer resource is selected.
-2. In the **Metric** drop-down list, select **Data Path Availability**.
+1. In the **Metric** drop-down list, select **Data Path Availability**.
-3. In the **Aggregation** drop-down list, select **Avg**.
+1. In the **Aggregation** drop-down list, select **Avg**.
-4. Additionally, add a filter on the frontend IP address or frontend port as the dimension with the required front-end IP address or front-end port, and then group them by the selected dimension.
+1. Additionally, add a filter on the frontend IP address or frontend port as the dimension with the required front-end IP address or front-end port. Then group them by the selected dimension.
:::image type="content" source="./media/load-balancer-standard-diagnostics/lbmetrics-vipprobing.png" alt-text="Load balancer frontend probing details.":::
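The same numbers can be retrieved without the portal. A hedged sketch: the internal name of the data path availability metric is assumed here to be `VipAvailability`, so confirm it with `az monitor metrics list-definitions` for your load balancer:

```azurecli
# query average data path availability over one-minute intervals
az monitor metrics list \
  --resource <load-balancer-resource-id> \
  --metric VipAvailability \
  --aggregation Average \
  --interval PT1M
```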
To get the health probe status for your standard load balancer resources:
Health probes fail for the following reasons: -- You configure a health probe to a port that isn't listening or not responding or is using the wrong protocol. If your service is using direct server return or floating IP rules, make sure that the service is listening on the IP address of the NIC's IP configuration and not just on the loopback that's configured with the front-end IP address.
+- You configure a health probe to a port that isn't listening, isn't responding, or is using the wrong protocol. If your service is using direct server return or floating IP rules, verify that the service is listening on the IP address of the NIC's IP configuration, not just on the loopback that's configured with the front-end IP address.
-- Your probe isn't permitted by the Network Security Group, the VM's guest OS firewall, or the application layer filters.
+- Your Network Security Group, the VM's guest OS firewall, or the application layer filters don't allow the health probe traffic.
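For the NSG case, make sure platform probe traffic is allowed in. Health probes arrive via the `AzureLoadBalancer` service tag; a hedged sketch with placeholder names, and port 80 standing in for your probe port:

```azurecli
# allow inbound health probe traffic from the Azure load balancer platform
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name AllowAzureLBProbe \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 80
```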
Use **Average** as the aggregation for most scenarios.
To get SNAT connection statistics:
<summary>Expand</summary>
-The used SNAT ports metric tracks how many SNAT ports are being consumed to maintain outbound flows. This indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and doesn't have a public IP address. By comparing the number of SNAT ports you're using with the Allocated SNAT Ports metric, you can determine if your service is experiencing or at risk of SNAT exhaustion and resulting outbound flow failure.
+The used SNAT ports metric tracks how many SNAT ports are being consumed to maintain outbound flows. This metric indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and doesn't have a public IP address. By comparing the number of SNAT ports you're using with the Allocated SNAT Ports metric, you can determine whether your service is experiencing or at risk of SNAT exhaustion and the resulting outbound flow failure.
If your metrics indicate risk of [outbound flow](./load-balancer-outbound-connections.md) failure, refer to that article and take steps to mitigate the risk to ensure service health.
To view SNAT port usage and allocation:
2. Select **Used SNAT Ports** and/or **Allocated SNAT Ports** as the metric type and **Average** as the aggregation.
- * By default these metrics are the average number of SNAT ports allocated to or used by each backend VM or virtual machine scale set, corresponding to all frontend public IPs mapped to the load balancer, aggregated over TCP and UDP.
+ * By default, these metrics are the average number of SNAT ports allocated to or used by each backend VM or virtual machine scale set. They correspond to all frontend public IPs mapped to the load balancer, aggregated over TCP and UDP.
 * To view total SNAT ports used by or allocated for the load balancer, use the **Sum** metric aggregation.
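The same counters can be pulled from the CLI. The metric names `UsedSnatPorts` and `AllocatedSnatPorts` are assumptions here, so verify them with `az monitor metrics list-definitions`:

```azurecli
# average used and allocated SNAT ports per backend instance
az monitor metrics list \
  --resource <load-balancer-resource-id> \
  --metric UsedSnatPorts AllocatedSnatPorts \
  --aggregation Average
```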
Azure Load Balancer supports easily configurable alerts for multi-dimensional me
To configure alerts:
-1. Go to the alert sub-blade for the load balancer
+1. Go to the alert page for the load balancer.
2. Create a new alert rule.
- 1. Configure alert condition (Note: to avoid noisy alerts, we recommend configuring alerts with the Aggregation type set to Average, looking back on a 5 minute window of data, and with a threshold of 95%)
+ 1. Configure alert condition (Note: to avoid noisy alerts, we recommend configuring alerts with the Aggregation type set to Average, looking back on a five-minute window of data, and with a threshold of 95%)
2. (Optional) Add action group for automated repair
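A hedged CLI equivalent of the recommended condition (Average aggregation, five-minute window, 95% threshold); `VipAvailability` is the assumed internal name of the data path availability metric, and the names are placeholders:

```azurecli
az monitor metrics alert create \
  --name lb-datapath-availability \
  --resource-group myResourceGroup \
  --scopes <load-balancer-resource-id> \
  --condition "avg VipAvailability < 95" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 2
```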
To configure alerts:
>[!NOTE] > If your load balancer's backend pools are empty, the load balancer will not have any valid data paths to test. As a result, the data path availability metric will not be available, and any configured Azure Alerts on the data path availability metric will not trigger.
-To alert for inbound availability, you can create two separate alerts using the data path availability and health probe status metrics. Customers may have different scenarios that require specific alerting logic, but the below examples will be helpful for most configurations.
+To alert for inbound availability, you can create two separate alerts using the data path availability and health probe status metrics. Customers may have different scenarios that require specific alerting logic, but the below examples are helpful for most configurations.
Using data path availability, you can fire alerts whenever a specific load-balancing rule becomes unavailable. You can configure this alert by setting an alert condition for the data path availability and splitting by all current values and future values for both frontend port and frontend IP address. Setting the alert logic to be less than or equal to 0 will cause this alert to be fired whenever any load-balancing rule becomes unresponsive. Set the aggregation granularity and frequency of evaluation according to your desired evaluation.
-With health probe status you can alert when a given backend instance fails to respond to the health probe for a significant amount of time. Set up your alert condition to use the health probe status metric and split by backend IP address and backend port. This will ensure that you can alert separately for each individual backend instance's ability to serve traffic on a specific port. Use the **Average** aggregation type and set the threshold value according to how frequently your backend instance is probed and what you consider to be your healthy threshold.
+With health probe status, you can alert when a given backend instance fails to respond to the health probe for a significant amount of time. Set up your alert condition to use the health probe status metric and split by backend IP address and backend port. This ensures that you can alert separately on each individual backend instance's ability to serve traffic on a specific port. Use the **Average** aggregation type and set the threshold value according to how frequently your backend instance is probed and what you consider a healthy threshold.
-You can also alert on a backend pool level by not splitting by any dimensions and using the **Average** aggregation type. This will allow you to set up alert rules such as alert when 50% of my backend pool members are unhealthy.
+You can also alert on a backend pool level by not splitting by any dimensions and using the **Average** aggregation type. This allows you to set up alert rules such as alerting when 50% of your backend pool members are unhealthy.
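As a sketch of the per-instance alert described above, assuming `DipAvailability` as the internal name of the health probe status metric and `BackendIPAddress`/`BackendPort` as the dimension names (verify both against your metric definitions):

```azurecli
az monitor metrics alert create \
  --name lb-probe-status \
  --resource-group myResourceGroup \
  --scopes <load-balancer-resource-id> \
  --condition "avg DipAvailability < 95 where BackendIPAddress includes * and BackendPort includes *" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 3
```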
### Outbound availability alerting
-To configure for outbound availability, you can configure two separate alerts using the SNAT connection count and used SNAT port metrics.
+For outbound availability, you can configure two separate alerts using the SNAT connection count and used SNAT port metrics.
-To detect outbound connection failures, configure an alert using SNAT connection count and filtering to **Connection State = Failed**. Use the **Total** aggregation. You can then also split this by backend IP address set to all current and future values to alert separately for each backend instance experiencing failed connections. Set the threshold to be greater than zero or a higher number if you expect to see some outbound connection failures.
+To detect outbound connection failures, configure an alert using SNAT connection count and filtering to **Connection State = Failed**. Use the **Total** aggregation. Then, you can split this by backend IP address set to all current and future values to alert separately for each backend instance experiencing failed connections. Set the threshold to be greater than zero or a higher number if you expect to see some outbound connection failures.
-With used SNAT ports you can alert on a higher risk of SNAT exhaustion and outbound connection failure. Ensure you're splitting by backend IP address and protocol when using this alert. Use the **Average** aggregation. Set the threshold to be greater than a percentage of the number of ports you've allocated per instance that you determine is unsafe. For example, configure a low severity alert when a backend instance uses 75% of its allocated ports. Configure a high severity alert when it uses 90% or 100% of its allocated ports.
+With used SNAT ports, you can alert on a higher risk of SNAT exhaustion and outbound connection failure. Ensure you're splitting by backend IP address and protocol when using this alert. Use the **Average** aggregation. Set the threshold to be greater than a percentage of the number of ports you've allocated per instance that you determine is unsafe. For example, configure a low severity alert when a backend instance uses 75% of its allocated ports. Configure a high severity alert when it uses 90% or 100% of its allocated ports.
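For example, assuming 1,024 allocated ports per instance, the 75% low-severity threshold works out to 768. The metric name `UsedSnatPorts` and the dimension names are assumptions to verify against your metric definitions:

```azurecli
az monitor metrics alert create \
  --name lb-snat-usage-low \
  --resource-group myResourceGroup \
  --scopes <load-balancer-resource-id> \
  --condition "avg UsedSnatPorts > 768 where BackendIPAddress includes * and ProtocolType includes *" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 3
```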
## <a name = "ResourceHealth"></a>Resource health status
Health status for the standard load balancer resources is exposed via the existi
| Resource health status | Description | | | | | Available | Your standard load balancer resource is healthy and available. |
-| Degraded | Your standard load balancer has platform or user initiated events impacting performance. The metric for data path availability has reported less than 90% but greater than 25% health for at least two minutes. You'll experience moderate to severe performance effect. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user initiated events causing impacting your availability.
-| Unavailable | Your standard load balancer resource isn't healthy. The metric for data path availability has reported less the 25% health for at least two minutes. You'll experience significant performance effect or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user initiated events impacting your availability. |
+| Degraded | Your standard load balancer has platform or user-initiated events impacting performance. The metric for data path availability has reported less than 90% but greater than 25% health for at least two minutes. With this status, you experience moderate to severe performance effects. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user-initiated events impacting your availability.
+| Unavailable | Your standard load balancer resource isn't healthy. The metric for data path availability has reported less than 25% health for at least two minutes. With this status, you experience significant performance effects or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user-initiated events impacting your availability. |
| Unknown | Health status for your load balancer resource hasn't been updated or hasn't received information for data path availability for the last 10 minutes. This state should be transient and reflects the correct status as soon as data is received. | To view the health of your public standard load balancer resources:
A generic description of a resource health status is available in the [resource
## Next steps - Learn about [Network Analytics](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics).-- Learn about using [Insights](./load-balancer-insights.md) to view these metrics pre-configured for your load balancer.
+- Learn about using [Insights](./load-balancer-insights.md) to view these metrics preconfigured for your load balancer.
- Learn more about [Standard load balancer](./load-balancer-overview.md).
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
Previously updated : 07/17/2020 Last updated : 05/03/2023 -+ # Guidance for Virtual Machine Scale Sets with Azure Load Balancer
When you use the Virtual Machine Scale Set in the back-end pool of the load bala
## Virtual Machine Scale Set instance-level IPs
-When Virtual Machine Scale Sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard).
+When Virtual Machine Scale Sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the Load Balancer (that is, Basic or Standard) determines the SKU of the instance IPs.
## Outbound rules
load-balancer Outbound Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/outbound-rules.md
Title: Outbound rules Azure Load Balancer
-description: This article explains how to configure outbound rules to control egress of internet traffic with Azure Load Balancer.
+description: This article explains how to configure outbound rules to control outbound internet traffic with Azure Load Balancer.
With outbound rules, you have full declarative control over outbound internet co
Outbound rules are followed only if the backend VM doesn't have an instance-level public IP address (ILPIP). With outbound rules, you can explicitly define outbound **SNAT** behavior.
You can use this parameter in two ways:
2. Tune the outbound **SNAT** parameters of an IP address used for inbound and outbound simultaneously. The automatic outbound NAT must be disabled to allow an outbound rule to take control. To change the SNAT port allocation of an address also used for inbound, the `disableOutboundSnat` parameter must be set to true.
-The operation to configure an outbound rule will fail if you attempt to redefine an IP address that is used for inbound. Disable the outbound NAT of the load-balancing rule first.
+The operation to configure an outbound rule fails if you attempt to redefine an IP address that is used for inbound. Disable the outbound NAT of the load-balancing rule first.
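A hedged sketch of disabling outbound SNAT on an existing load-balancing rule so that an outbound rule can take control; the names are placeholders, and you should verify the flag against `az network lb rule update --help`:

```azurecli
# let an outbound rule control SNAT for the frontend used by this rule
az network lb rule update \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --disable-outbound-snat true
```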
>[!IMPORTANT] > Your virtual machine will not have outbound connectivity if you set this parameter to true and do not have an outbound rule to define outbound connectivity. Some operations of your VM or your application may depend on having outbound connectivity available. Make sure you understand the dependencies of your scenario and have considered the impact of making this change.
Each public IP address contributes up to 64,000 ephemeral ports. The number of V
You can use outbound rules to tune the SNAT ports given by default. You can give more or fewer ports than the default [SNAT](load-balancer-outbound-connections.md) port allocation provides. Each public IP address from a frontend of an outbound rule contributes up to 64,000 ephemeral ports for use as [SNAT](load-balancer-outbound-connections.md) ports.
-Load balancer gives [SNAT](load-balancer-outbound-connections.md) ports in multiples of 8. If you provide a value not divisible by 8, the configuration operation is rejected. Each load balancing rule and inbound NAT rule will consume a range of eight ports. If a load balancing or inbound NAT rule shares the same range of 8 as another, no extra ports will be consumed.
+Load balancer gives [SNAT](load-balancer-outbound-connections.md) ports in multiples of 8. If you provide a value not divisible by 8, the configuration operation is rejected. Each load balancing rule and inbound NAT rule consumes a range of eight ports. If a load balancing or inbound NAT rule shares the same range of eight ports as another, no extra ports are consumed.
If you attempt to give out more [SNAT](load-balancer-outbound-connections.md) ports than are available (based on the number of public IP addresses), the configuration operation is rejected. For example, if you give 10,000 ports per VM and seven VMs in a backend pool share a single public IP, the configuration is rejected. Seven multiplied by 10,000 exceeds the 64,000 port limit. Add more public IP addresses to the frontend of the outbound rule to enable the scenario.
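A sketch of an outbound rule that explicitly allocates ports, under the divisibility rule above; 8,000 is a multiple of 8, and all names are placeholders:

```azurecli
# allocate 8,000 SNAT ports per backend instance
az network lb outbound-rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myOutboundRule \
  --frontend-ip-configs myOutboundFrontEnd \
  --address-pool myBackendPool \
  --protocol All \
  --outbound-ports 8000
```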
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
Previously updated : 12/22/2021 Last updated : 04/20/2023 + # Azure Load Balancer SKUs
To compare and understand the differences between Basic and Standard SKU, see th
| **[Multiple front ends](./load-balancer-multivip-overview.md)** | Inbound and [outbound](./load-balancer-outbound-connections.md) | Inbound only | | **Management Operations** | Most operations < 30 seconds | 60-90+ seconds typical | | **SLA** | [99.99%](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) | Not available |
-| **Global VNet Peering Support** | Standard ILB is supported via Global VNet Peering | Not supported |
-| **[NAT Gateway Support](../virtual-network/nat-gateway/nat-overview.md)** | Both Standard ILB and Standard Public LB are supported via Nat Gateway | Not supported |
-| **[Private Link Support](../private-link/private-link-overview.md)** | Standard ILB is supported via Private Link | Not supported |
-| **[Global tier (Preview)](./cross-region-overview.md)** | Standard LB supports the Global tier for Public LBs enabling cross-region load balancing | Not supported |
+| **Global VNet Peering Support** | Standard Internal Load Balancer is supported via Global VNet Peering | Not supported |
+| **[NAT Gateway Support](../virtual-network/nat-gateway/nat-overview.md)** | Both Standard Internal Load Balancer and Standard Public Load Balancer are supported via NAT Gateway | Not supported |
+| **[Private Link Support](../private-link/private-link-overview.md)** | Standard Internal Load Balancer is supported via Private Link | Not supported |
+| **[Global tier (Preview)](./cross-region-overview.md)** | Standard Load Balancer supports the Global tier for Public Load Balancers enabling cross-region load balancing | Not supported |
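For illustration, a minimal sketch of creating a Standard SKU internal load balancer with the CLI (placeholder names; the `--sku` flag selects the tier):

```azurecli
# create a Standard SKU internal load balancer in an existing virtual network
az network lb create \
  --resource-group myResourceGroup \
  --name myStandardLB \
  --sku Standard \
  --vnet-name myVNet \
  --subnet myBackendSubnet
```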
For more information, see [Load balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer). For Standard Load Balancer details, see [overview](./load-balancer-overview.md), [pricing](https://aka.ms/lbpricing), and [SLA](https://aka.ms/lbsla). For information on the Gateway SKU, which caters to third-party network virtual appliances (NVAs), see [Gateway Load Balancer overview](gateway-overview.md).
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
This section lists the Azure resource types that Azure Load Testing supports for
* Key Vault * Service Bus * Static Web Apps
-* Storage Accounts: Azure Blog Storage/Azure Files/Azure Table Storage/Queue Storage
+* Storage Accounts: Azure Blob Storage/Azure Files/Azure Table Storage/Queue Storage
* Storage Accounts (classic): Azure Files/Azure Table Storage/Queue Storage * Traffic Manager profile * Virtual Machine Scale Sets
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| North Central US | 168.62.249.81, 157.56.12.202, 65.52.211.164, 65.52.9.64, 52.162.177.104, 23.101.174.98 | | North Europe | 13.79.173.49, 52.169.218.253, 52.169.220.174, 40.112.90.39, 40.127.242.203, 51.138.227.94, 40.127.145.51 | | Norway East | 51.120.88.93, 51.13.66.86, 51.120.89.182, 51.120.88.77, 20.100.27.17, 20.100.36.102 |
+| Norway West | 51.120.220.160, 51.120.220.161, 51.120.220.162, 51.120.220.163, 51.13.155.184, 51.13.151.90 |
| South Africa North | 102.133.228.4, 102.133.224.125, 102.133.226.199, 102.133.228.9, 20.87.92.64, 20.87.91.171 | | South Africa West | 102.133.72.190, 102.133.72.145, 102.133.72.184, 102.133.72.173, 40.117.9.225, 102.133.98.91 | | South Central US | 13.65.98.39, 13.84.41.46, 13.84.43.45, 40.84.138.132, 20.94.151.41, 20.88.209.113 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225, 52.162.177.90, 52.162.177.30, 23.101.160.111, 23.101.167.207 | | North Europe | 40.113.12.95, 52.178.165.215, 52.178.166.21, 40.112.92.104, 40.112.95.216, 40.113.4.18, 40.113.3.202, 40.113.1.181, 40.127.242.159, 40.127.240.183, 51.138.226.19, 51.138.227.160, 40.127.144.251, 40.127.144.121 | | Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248, 20.100.26.148, 20.100.26.52, 20.100.36.49, 20.100.36.10 |
+| Norway West | 51.120.220.128, 51.120.220.129, 51.120.220.130, 51.120.220.131, 51.120.220.132, 51.120.220.133, 51.120.220.134, 51.120.220.135, 51.13.153.172, 51.13.148.178, 51.13.148.11, 51.13.149.162 |
| South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51, 20.87.92.40, 20.87.91.122, 20.87.91.169, 20.87.88.47 | | South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191, 102.133.101.220, 40.117.9.125, 40.117.10.230, 40.117.9.229 | | South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225, 20.94.150.220, 20.94.149.199, 20.88.209.97, 20.88.209.88 |
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
ml_client.online_endpoints.get(name=endpoint_name, local=True)
The method returns [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint). The `provisioning_state` is `Succeeded`.
-```python
+```
ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'endpt-10061534497697', 'description': 'this is a sample endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None}) ```
ml_client.online_endpoints.invoke(
If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run the following code. In the returned data, find the `scoring_uri` attribute. Sample curl-based commands are available later in this doc. ```python
-endpoint = ml_client.online_endpoints.get(endpoint_name)
+endpoint = ml_client.online_endpoints.get(endpoint_name, local=True)
scoring_uri = endpoint.scoring_uri ```
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Title: Set up service authentication
description: Learn how to set up and configure authentication between Azure Machine Learning and other Azure services. --++
For automated creation of role assignments on your user-assigned managed identit
> [!TIP] > For a workspace with [customer-managed keys for encryption](concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use the `user-assigned-identity-for-cmk-encryption` (CLI) or `user_assigned_identity_for_cmk_encryption` (SDK) parameters to pass in the managed identity. This managed identity can be the same or different as the workspace primary user assigned managed identity.
+To create a workspace with a user-assigned identity, use one of the following methods:
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml workspace create -f workspace_uai.yml
+```
+
+Where the contents of *workspace_uai.yml* are as follows:
+
+```yaml
+name: <workspace name>
+location: <region name>
+resource_group: <resource group name>
+identity:
+ type: user_assigned
+ tenant_id: <tenant ID>
+ user_assigned_identities:
+ '<UAI resource ID 1>': {}
+ '<UAI resource ID 2>': {}
+storage_account: <storage account resource ID>
+key_vault: <key vault resource ID>
+image_build_compute: <compute(virtual machine) resource ID>
+primary_user_assigned_identity: <one of the UAI resource IDs in the above list>
+```
+
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml import MLClient, load_workspace
+from azure.identity import DefaultAzureCredential
+sub_id="<subscription ID>"
+rg_name="<resource group name>"
+ws_name="<workspace name>"
+client = MLClient(DefaultAzureCredential(), sub_id, rg_name)
+wps = load_workspace("workspace_uai.yml")
+workspace = client.workspaces.begin_create(workspace=wps).result()
+# update SAI workspace to SAI&UAI workspace
+wps = load_workspace("workspace_sai_and_uai.yml")
+workspace = client.workspaces.begin_update(workspace=wps).result()
+```
+
+Where the contents of *workspace_sai_and_uai.yml* are as follows:
+
+```yaml
+name: <workspace name>
+location: <region name>
+resource_group: <resource group name>
+identity:
+ type: system_assigned, user_assigned
+ tenant_id: <tenant ID>
+ user_assigned_identities:
+ '<UAI resource ID 1>': {}
+ '<UAI resource ID 2>': {}
+storage_account: <storage account resource ID>
+key_vault: <key vault resource ID>
+image_build_compute: <compute(virtual machine) resource ID>
+primary_user_assigned_identity: <one of the UAI resource IDs in the above list>
+```
+
+# [Studio](#tab/azure-studio)
+
+Not supported currently.
+++ ### Compute cluster > [!NOTE]
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
Defining paths to read Delta Lake tables is different compared to the other file
```python import mltable
-# define the path containing the delta table (where the _delta_log file is stored)
+# define the cloud path containing the delta table (where the _delta_log file is stored)
delta_table = "abfss://<file_system>@<account_name>.dfs.core.windows.net/<path_to_delta_table>" # create an MLTable. Note the timestamp_as_of parameter for time travel.
tbl = mltable.from_delta_lake(
) ```
+If you want to get the latest version of the Delta Lake data, you can pass the current timestamp to `timestamp_as_of`.
+
+```python
+import mltable
+import time
+
+# define the relative path containing the delta table (where the _delta_log file is stored)
+delta_table_path = "./working-directory/delta-sample-data"
+
+# get the current timestamp in the required format
+current_timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
+print(current_timestamp)
+tbl = mltable.from_delta_lake(delta_table_path, timestamp_as_of=current_timestamp)
+df = tbl.to_pandas_dataframe()
+```
+ ### Files, folders and globs Azure Machine Learning Tables support reading from:
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
When deploying a machine learning model to a batch endpoint, you can secure thei
## Securing batch endpoints
-Batch endpoints inherent the networking configuration from the workspace where they are deployed. All the batch endpoints created inside of secure workspace are deployed as private batch endpoints by default. In order to have fully operational batch endpoints working with private networking, follow the following steps:
+Batch endpoints inherit the networking configuration from the workspace where they're deployed. All batch endpoints created inside a private link-enabled workspace are deployed as private batch endpoints by default. When the workspace is correctly configured, no further configuration is required.
+
+To verify that your workspace is correctly configured for batch endpoints to work with private networking, ensure the following:
1. You have configured your Azure Machine Learning workspace for private networking. For more details about how to achieve it read [Create a secure workspace](tutorial-create-secure-workspace.md).
Batch endpoints inherit the networking configuration from the workspace where t
3. Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four to work properly.
-4. Create the batch endpoint as regularly done.
- The following diagram shows what the networking looks like for batch endpoints when deployed in a private workspace: :::image type="content" source="./media/how-to-secure-batch-endpoint/batch-vnet-peering.png" alt-text="Diagram that shows the high level architecture of a secure Azure Machine Learning workspace deployment.":::
+> [!CAUTION]
+> Batch Endpoints, as opposed to Online Endpoints, don't use Azure Machine Learning managed VNets. Hence, they don't support the keys `public_network_access` or `egress_public_network_access`. It isn't possible to deploy public batch endpoints on private link-enabled workspaces.
## Securing batch deployment jobs Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too. 1. Create an Azure Machine Learning [compute cluster in the virtual network](how-to-secure-training-vnet.md).
-2. Ensure all related services have private endpoints configured in the network. Private endpoints are used for not only Azure Machine Learning workspace, but also its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, please note that there are [some prerequisites about Azure Container Registry](how-to-secure-workspace-vnet.md#prerequisites).
-4. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#compute-instancecluster-with-public-ip) so that management services can submit jobs to your compute resources.
+
+1. Ensure all related services have private endpoints configured in the network. Private endpoints are used not only for the Azure Machine Learning workspace, but also for its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, note that there are [some prerequisites about Azure Container Registry](how-to-secure-workspace-vnet.md#prerequisites).
+
+1. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#compute-instancecluster-with-public-ip) so that management services can submit jobs to your compute resources.
> [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
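A hedged sketch of step 1, creating the cluster inside a virtual network; the resource names are placeholders, and the `--vnet-name` and `--subnet` flags are assumptions that depend on your CLI version, so confirm them with `az ml compute create --help`:

```azurecli
# create a compute cluster whose nodes are placed in an existing subnet
az ml compute create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name batch-cluster \
  --type AmlCompute \
  --size Standard_DS3_v2 \
  --min-instances 0 \
  --max-instances 4 \
  --vnet-name myVNet \
  --subnet mySubnet
```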
Azure Machine Learning batch deployments run on compute clusters. To secure batc
For more information, see the [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md) article.
-## Using two-networks architecture
-
-There are cases where the input data is not in the same network as in the Azure Machine Learning resources. In those cases, your Azure Machine Learning workspace may need to interact with more than one VNet. You can achieve this configuration by adding an extra set of private endpoints to the VNet where the rest of the resources are located.
-
-The following diagram shows the high level design:
--
-### Considerations
-
-Have the following considerations when using such architecture:
-
-* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. It prevents a name resolution conflict between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Note that the DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details, see [recommended zone names for Azure services](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
-* For your storage accounts, add 4 private endpoints in each VNet for blob, file, queue, and table as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
- ## Limitations Consider the following limitations when working on batch endpoints deployed regarding networking:
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
These prerequisites cover the submission of a Spark job from Azure Machine Learn
1. Navigate to Azure Machine Learning studio UI. 2. Select **Manage preview features** (megaphone icon) from the icons on the top right side of the screen. 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
+ :::image type="content" source="media/how-to-submit-spark-jobs/how-to-enable-managed-spark-preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
ml_client.jobs.stream(returned_spark_job.name)
To submit a standalone Spark job using the Azure Machine Learning studio UI: - In the left pane, select **+ New**. - Select **Spark job (preview)**. - On the **Compute** screen: 1. Under **Select compute type**, select **Spark automatic compute (Preview)** for serverless Spark compute, or **Attached compute** for an attached Synapse Spark pool. 1. If you selected **Spark automatic compute (Preview)**:
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
description: Learn about using Data-in replication to synchronize from an extern
Previously updated : 12/30/2022 Last updated : 05/02/2023
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Data-in replication allows you to synchronize data from an external MySQL server into an Azure Database for MySQL flexilbe server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication is based on the binary log (binlog) file position-based. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Data-in replication allows you to synchronize data from an external MySQL server into an Azure Database for MySQL flexible server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication uses either binary log (binlog) file position-based or GTID-based replication. To learn more about binlog replication, see [MySQL Replication](https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html).
> [!NOTE]
-> GTID-based replication is currently not supported for Azure Database for MySQL - Flexible Servers.<br>
-> Configuring Data-in replication for zone-redundant high-availability servers is not supported.
+> Configuring Data-in replication for zone-redundant high-availability servers is supported only through GTID-based replication.
## When to use Data-in replication The main scenarios to consider about using Data-in replication are: -- **Hybrid Data Synchronization:** With Data-in replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL - Flexible Server. This synchronization is useful for creating hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
+- **Hybrid Data Synchronization:** With Data-in replication, you can keep data synchronized between your on-premises servers and Azure Database for MySQL - Flexible Server. This synchronization helps create hybrid applications. This method is appealing when you have an existing local database server but want to move the data to a region closer to end users.
- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-in replication to synchronize data between Azure Database for MySQL - Flexible Server and different cloud providers, including virtual machines and database services hosted in those clouds.-- **Migration:** Customers can do Minimal Time migration using open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with Data-in replication. A selective cutover of production load from source to destination database is possible with Data-in replication.
+- **Migration:** Customers can migrate in minimal time using open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with Data-in replication. A selective cutover of production load from source to destination database is possible with Data-in replication.
For migration scenarios, use the [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) (DMS).
For migration scenarios, use the [Azure Database Migration Service](https://azur
The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand the tables in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
-### Data-in replication not supported on High Availability (HA) enabled servers
+### Data-in replication is supported on High Availability (HA) enabled servers
-It isn't supported to configure Data-in replication for servers that have high availability (HA) option enabled. On HA-enabled servers, the stored procedures for replication `mysql.az_replication_*` won't be available.
+Support for Data-in replication on high availability (HA) enabled servers is available only through GTID-based replication.
-> [!TIP]
-> If you are using the HA server as a source server, MySQL native binary log (binlog) file position-based replication will fail when failover happens on the server. If replica server supports GTID based replication, we should configure GTID based replication.
+The stored procedure for replication using GTID is available on all HA-enabled servers by the name `mysql.az_replication_with_gtid`.
### Filter
-Parameter `replicate_wild_ignore_table` is used to create replication filter for tables on the replica server. To modify this parameter from Azure portal, navigate to Azure Database for MySQL flexible server used as replica and select "Server Parameters" to view/edit the `replicate_wild_ignore_table` parameter.
+The parameter `replicate_wild_ignore_table` creates a replication filter for tables on the replica server. To modify this parameter from the Azure portal, navigate to Azure Database for MySQL flexible server used as replica and select "Server Parameters" to view/edit the `replicate_wild_ignore_table` parameter.
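A hedged CLI equivalent of that portal edit; the server name and the filter pattern are placeholders:

```azurecli
# ignore tables matching mydb.ignore_% on the replica
az mysql flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name myReplicaServer \
  --name replicate_wild_ignore_table \
  --value "mydb.ignore_%"
```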
### Requirements - The source server version must be at least MySQL version 5.7. - Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7, or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have a table without primary key, you might face slowness in replication.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might face slowness in replication.
- The source server should use the MySQL InnoDB engine.-- User must have the right permissions to configure binary logging and create new users on the source server.-- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL refer how to configure binlog_expire_logs_seconds for [flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
+- The user must have the right permissions to configure binary logging and create new users on the source server.
+- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, refer to how to configure binlog_expire_logs_seconds for [flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter. - Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306. - Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
Previously updated : 03/01/2021 Last updated : 05/03/2023 # Quickstart: Connect and query with Azure CLI with Azure Database for MySQL - Flexible Server
This quickstart demonstrates how to connect to an Azure Database for MySQL - Fle
- An Azure account with an active subscription. [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]-- Install [Azure CLI](/cli/azure/install-azure-cli) latest version (2.20.0 or above)
+- Install [Azure CLI](/cli/azure/install-azure-cli) latest version (2.20.0 or higher)
- Log in using Azure CLI with ```az login``` command-- Turn on parameter persistence with ```az config param-persist on```. Parameter persistence will help you use local context without having to repeat a lot of arguments like resource group or location etc.
+- Turn on parameter persistence with ```az config param-persist on```. Parameter persistence helps you use local context without having to repeat arguments such as resource group or location.
## Create a MySQL Flexible Server
-The first thing we'll create is a managed MySQL server. In [Azure Cloud Shell](https://shell.azure.com/), run the following script and make a note of the **server name**, **username** and **password** generated from this command.
+The first thing we create is a managed MySQL server. In [Azure Cloud Shell](https://shell.azure.com/), run the following script and make a note of the **server name**, **username** and **password** generated from this command.
-```azurecli
+```azurecli-interactive
az mysql flexible-server create --public-access <your-ip-address> ```
-You can provide additional arguments for this command to customize it. See all arguments for [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create).
+You can provide more arguments for this command to customize it. See all arguments for [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create).
## Create a database
-Run the following command to create a database, **newdatabase** if you have not already created one.
-```azurecli
+Run the following command to create a database named `newdatabase`, if you haven't already created one.
+
+```azurecli-interactive
az mysql flexible-server db create -d newdatabase ``` ## View all the arguments+ You can view all the arguments for this command with the ```--help``` argument.
-```azurecli
+```azurecli-interactive
az mysql flexible-server connect --help ``` ## Test database server connection+ Run the following script to test and validate the connection to the database from your development environment.
-```azurecli
+```azurecli-interactive
az mysql flexible-server connect -n <servername> -u <username> -p <password> -d <databasename> ``` **Example:**
-```azurecli
+
+```azurecli-interactive
az mysql flexible-server connect -n mysqldemoserver1 -u dbuser -p "dbpassword" -d newdatabase ```
Command group 'mysql flexible-server' is in preview and under development. Refer
Connecting to newdatabase database. Successfully connected to mysqldemoserver1. ```+ If the connection failed, try these solutions:+ - Check if port 3306 is open on your client machine. - Verify that your server administrator user name and password are correct. - Verify that you've configured a firewall rule for your client machine. - If you've configured your server with private access in virtual networking, make sure your client machine is in the same virtual network. ## Run multiple queries using interactive mode+ You can run multiple queries using the **interactive** mode. To enable interactive mode, run the following command
-```azurecli
+```azurecli-interactive
az mysql flexible-server connect -n <server-name> -u <username> -p <password> --interactive ``` **Example:**
-```azurecli
+
+```azurecli-interactive
az mysql flexible-server connect -n mysqldemoserver1 -u dbuser -p "dbpassword" -d newdatabase --interactive ```
-You will see the **MySQL** shell experience as shown below:
+You see the **MySQL** shell experience, as shown below:
-```bash
+```mysql
Command group 'mysql flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Password: mysql 5.7.29-log
Your preference of are now saved to local context. To learn more, type in `az l
``` ## Run Single Query+ Run the following command to execute a single query using ```--querytext``` argument, ```-q```.
-```azurecli
+```azurecli-interactive
az mysql flexible-server execute -n <server-name> -u <username> -p "<password>" -d <database-name> --querytext "<query text>" ``` **Example:**
-```azurecli
+
+```azurecli-interactive
az mysql flexible-server execute -n mysqldemoserver1 -u dbuser -p "dbpassword" -d newdatabase -q "select * from table1;" --output table ```
-You will see an output as shown below:
+You see output as shown below:
```output Command group 'mysql flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
test 200
``` ## Run SQL File+ You can execute a sql file with the command using ```--file-path``` argument, ```-q```.
-```azurecli
+```azurecli-interactive
az mysql flexible-server execute -n <server-name> -u <username> -p "<password>" -d <database-name> --file-path "<file-path>" ``` **Example:**
-```azurecli
+
+```azurecli-interactive
az mysql flexible-server execute -n mysqldemoserver -u dbuser -p "dbpassword" -d flexibleserverdb -f "./test.sql" ```
-You will see an output as shown below:
+You see output as shown below:
```output Command group 'mysql flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
ms.devlang: csharp Previously updated : 01/16/2021 Last updated : 05/03/2023 # Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL - Flexible Server
This quickstart demonstrates how to connect to an Azure Database for MySQL by us
For this quickstart you need: -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] - Create an Azure Database for MySQL - Flexible Server using [Azure portal](./quickstart-create-server-portal.md) <br/> or [Azure CLI](./quickstart-create-server-cli.md) if you do not have one.
For this quickstart you need:
[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues) ## Create a C# project+ At a command prompt, run:
-```
+```bash
mkdir AzureMySqlExample cd AzureMySqlExample dotnet new console
dotnet add package MySqlConnector
``` ## Get connection information+ Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
Get the connection information needed to connect to the Azure Database for MySQL
:::image type="content" source="./media/connect-csharp/server-overview-name-login.png" alt-text="Azure Database for MySQL server name"::: ## Step 1: Connect and insert data+ Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class:+ - [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. - [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand), sets the CommandText property - [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
namespace AzureMySqlExample
## Step 2: Read data Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:+ - [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. - [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property. - [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands. - [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record. - Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. ```csharp
namespace AzureMySqlExample
[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues) ## Step 3: Update data+ Use the following code to connect and read the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with method:+ - [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. - [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property - [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. - Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. ```csharp
namespace AzureMySqlExample
``` ## Step 4: Delete data+ Use the following code to connect and delete the data by using a `DELETE` SQL statement. The code uses the `MySqlConnection` class with method+ - [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL. - [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property. - [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands. - Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database. ```csharp
namespace AzureMySqlExample
To clean up all resources used during this quickstart, delete the resource group using the following command:
-```azurecli
+```azurecli-interactive
az group delete \
    --name $AZ_RESOURCE_GROUP \
    --yes
```

## Next steps
+
> [!div class="nextstepaction"]
> [Manage Azure Database for MySQL server using Portal](./how-to-manage-server-portal.md)<br/>
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
ms.devlang: java Previously updated : 10/20/2022 Last updated : 05/03/2023

# Use Java and JDBC with Azure Database for MySQL - Flexible Server
First, use the following command to set up some environment variables.
### [Passwordless (Recommended)](#tab/passwordless)
-```bash
+```azurecli-interactive
export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
Replace the placeholders with the following values, which are used throughout th
### [Password](#tab/password)
-```bash
+```azurecli-interactive
export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
Replace the placeholders with the following values, which are used throughout th
Next, create a resource group:
-```azurecli
+```azurecli-interactive
az group create \
    --name $AZ_RESOURCE_GROUP \
    --location $AZ_LOCATION \
The first thing you'll create is a managed MySQL server.
If you're using Azure CLI, run the following command to make sure it has sufficient permission:
-```bash
+```azurecli-interactive
az login --scope https://graph.microsoft.com/.default
```

Run the following command to create the server:
-```azurecli
+```azurecli-interactive
az mysql flexible-server create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
az mysql flexible-server create \
Run the following command to create a user-assigned identity for assigning:
-```azurecli
+```azurecli-interactive
az identity create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_USER_IDENTITY_NAME
az identity create \
Run the following command to assign the identity to MySQL server for creating Azure AD admin:
-```azurecli
+```azurecli-interactive
az mysql flexible-server identity assign \
    --resource-group $AZ_RESOURCE_GROUP \
    --server-name $AZ_DATABASE_NAME \
az mysql flexible-server identity assign \
Run the following command to set the Azure AD admin user:
-```azurecli
+```azurecli-interactive
az mysql flexible-server ad-admin create \
    --resource-group $AZ_RESOURCE_GROUP \
    --server-name $AZ_DATABASE_NAME \
This command creates a small MySQL server and sets the Active Directory admin to
#### [Password](#tab/password)
-```azurecli
+```azurecli-interactive
az mysql flexible-server create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
You can skip this step if you're using Bash because the `flexible-server create`
If you're connecting to your MySQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host ID to your firewall. Obtain the IP address of your host machine by running the following command in WSL:

```bash
-cat /etc/resolv.conf
+sudo cat /etc/resolv.conf
```

Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP Address:
AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
Then, use the following command to open the server's firewall to your WSL-based app:
-```azurecli
+```azurecli-interactive
az mysql flexible-server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
az mysql flexible-server firewall-rule create \
Create a new database called `demo` by using the following command:
-```azurecli
+```azurecli-interactive
az mysql flexible-server db create \
    --resource-group $AZ_RESOURCE_GROUP \
    --database-name demo \
Congratulations! You've created a Java application that uses JDBC to store and r
To clean up all resources used during this quickstart, delete the resource group using the following command:
-```azurecli
+```azurecli-interactive
az group delete \
    --name $AZ_RESOURCE_GROUP \
    --yes
networking Virtual Network Powershell Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-filter-network-traffic.md
Title: Azure PowerShell script sample - Filter VM network traffic | Microsoft Docs
+ Title: Azure PowerShell script sample - Filter VM network traffic
description: Azure PowerShell script sample - Filter inbound and outbound VM network traffic.---+ - Previously updated : 05/16/2017- Last updated : 05/02/2023+

# Filter inbound and outbound VM network traffic
-This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, and HTTPS, while outbound traffic to the Internet from the back-end subnet is not permitted. After running the script, you will have one virtual machine with two NICs. Each NIC is connected to a different subnet.
+This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, and HTTPS, while outbound traffic to the Internet from the back-end subnet isn't permitted. After running the script, you'll have one virtual machine with two NICs. Each NIC is connected to a different subnet.
If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
This script uses the following commands to create a resource group, virtual netw
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-Additional networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
operator-nexus Howto Cluster Metrics Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-metrics-configuration-management.md
Title: "Azure Operator Nexus: How to configure cluster metrics configuration management"
-description: Instructional for the inputs and methods for creating, updating, retrieving, and deleting cluster metrics configurations.
+description: Instructional for the inputs and methods for creating, listing, updating, retrieving, and deleting cluster metrics configurations.
# Cluster metrics configuration
-When the user deploys a Cluster, a standard set of metrics are enabled for collection. For the list of metrics, see
+When the user deploys a Cluster, a standard set of metrics gets enabled for collection. For the list of metrics, see
[List of Metrics Collected](List-of-metrics-collected.md). Users can't enable or disable collection of these included standard metrics. However, users can control the collection of some optional metrics that aren't part of the linked list. To enable this experience, users have to create and update a MetricsConfiguration resource for a cluster. By default, creating this MetricsConfiguration resource doesn't change the collection of metrics; users have to update the resource to enable or disable collection of these optional metrics.
az networkcloud cluster metricsconfiguration create \
* Replace values within `<` `>` with your specific information.
* Query the cluster resource and find the value of `<CLUSTER-EXTENDED-LOCATION-ID>` in the `properties.clusterExtendedLocation`
-* The `collectionInterval` field is required, `enabledMetrics` is optional and may be omitted.
+The `collection-interval` field is mandatory, and the `enabled-metrics` field is optional.
+
+Alternatively, operators can provide the list of enabled metrics via a JSON or YAML file.
+
+Example: enabled-metrics.json file
+```json
+[
+ "metric_1",
+ "metric_2"
+]
+```
+
+Example: enabled-metrics.yaml file
+```yaml
+- "metric_1"
+- "metric_2"
+```
+
+Example command to use the enabled-metrics JSON/YAML file:
+```azurecli
+az networkcloud cluster metricsconfiguration create \
+ --cluster-name "<CLUSTER>" \
+ --extended-location name="<CLUSTER_EXTENDED_LOCATION_ID>" type="CustomLocation" \
+ --location "<LOCATION>" \
+ --collection-interval <COLLECTION_INTERVAL (1-1440)> \
+ --enabled-metrics <path-to-yaml-or-json-file> \
+ --tags <TAG_KEY1>="<TAG_VALUE1>" <TAG_KEY2>="<TAG_VALUE2>" \
+ --resource-group "<RESOURCE_GROUP>"
+```
+
+Here, `<path-to-yaml-or-json-file>` can be `./enabled-metrics.json` or `./enabled-metrics.yaml`; place the file in the current working directory before running the command.
> [!NOTE]
> * The default metrics collection interval for the standard set of metrics is set to every 5 minutes. Changing the `collectionInterval` will also impact the collection frequency for the default standard metrics.
az networkcloud cluster metricsconfiguration create \
Specifying `--no-wait --debug` options in az cli command results in the execution of this command asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
-### Metrics configuration elements
+#### Metrics configuration elements
| Parameter name | Description |
| --| -- |
Specifying `--no-wait --debug` options in az cli command results in the executio
| TAG_VALUE1 | Optional tag1 value to pass to Cluster Create |
| TAG_KEY2 | Optional tag2 to pass to Cluster create |
| TAG_VALUE2 | Optional tag2 value to pass to Cluster create |
-| METRIC_TO_ENABLE_1 | Optional metric1 that is enabled in addition to the default metrics |
-| METRIC_TO_ENABLE_2 | Optional metric2 that is enabled in addition to the default metrics |
+| METRIC_TO_ENABLE_1 | Optional metric "METRIC_TO_ENABLE_1" enabled in addition to the default metrics |
+| METRIC_TO_ENABLE_2 | Optional metric "METRIC_TO_ENABLE_2" enabled in addition to the default metrics |
Specifying `--no-wait --debug` options in az cli command results in the execution of this command asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
-## Retrieving a metrics configuration
+### List the metrics configuration
+
+You can check the metrics configuration resource for a specific cluster by using the `az networkcloud cluster metricsconfiguration list` command:
+
+```azurecli
+az networkcloud cluster metricsconfiguration list \
+ --cluster-name "<CLUSTER>" \
+ --resource-group "<RESOURCE_GROUP>"
+```
+
+At most one metrics configuration resource can exist for a cluster.
+
+### Retrieving a metrics configuration
-After a metrics configuration is created, it can be retrieved using a `az rest` command:
+After a metrics configuration is created, operators can check the details of the resource by using the `az networkcloud cluster metricsconfiguration show` command:
```azurecli
az networkcloud cluster metricsconfiguration show \
This command returns a JSON representation of the metrics configuration.
-## Updating a metrics configuration
+### Updating a metrics configuration
-Much like the creation of a metrics configuration, an update can be performed to change the configuration or update the tags assigned to the metrics configuration.
+Much like the creation of a metrics configuration, operators can perform an update action to change the configuration or update the tags assigned to the metrics configuration.
```azurecli
az networkcloud cluster metricsconfiguration update \
az networkcloud cluster metricsconfiguration update \
    --resource-group "<RESOURCE_GROUP>"
```
-The `collection-interval` can be updated independently of `enabled-metrics` list. Omit fields that aren't being changed.
+Operators can update `collection-interval` independently of the `enabled-metrics` list. Omit fields that aren't being changed.
Specifying `--no-wait --debug` options in az cli command results in the execution of this command asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
-## Deleting a metrics configuration
+### Deleting a metrics configuration
-Deletion of the metrics configuration returns the cluster to an unaltered configuration. To delete a metrics configuration, use the below command:
+Deletion of the metrics configuration returns the cluster to an unaltered configuration. To delete a metrics configuration, use the command:
```azurecli
az networkcloud cluster metricsconfiguration delete \
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
Title: List of Metrics Collected in Azure Operator Nexus. description: List of metrics collected in Azure Operator Nexus.---- Previously updated : 02/03/2023 #Required; mm/dd/yyyy format.-++++ Last updated : 02/03/2023+

# List of metrics collected in Azure Operator Nexus
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 4/10/2023
This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL.
+## Release: May 2023
+* Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL – Flexible Server.
## Release: April 2023
* Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL – Flexible Server.
* General availability: [Power BI integration](./connect-with-power-bi-desktop.md) for Azure Database for PostgreSQL – Flexible Server.
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for Pos
| **Region name** | **Gateway IP addresses** | **Gateway IP address subnets** |
|:-|:-|:-|
-| Australia Central| 20.36.105.0 | 20.36.105.32/29 |
-| Australia Central2 | 20.36.113.0 | 20.36.113.32/29 |
-| Australia East | 13.75.149.87, 40.79.161.1 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
-| Australia South East |13.77.48.10, 13.77.49.32, 13.73.109.251 |13.77.49.32/29 |
+| Australia Central| | 20.36.105.32/29 |
+| Australia Central2 | | 20.36.113.32/29 |
+| Australia East | | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
+| Australia South East | |13.77.49.32/29 |
| Brazil South |191.233.201.8, 191.233.200.16 | 191.233.200.32/29, 191.234.144.32/29|
| Canada Central |40.85.224.249, 52.228.35.221 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29|
-| Canada East | 40.86.226.166, 52.242.30.154 | 40.69.105.32/29 |
+| Canada East | | 40.69.105.32/29 |
| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29|
| China East | 139.219.130.35 | 52.130.112.136/29 |
| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29|
The following table lists the gateway IP addresses of the Azure Database for Pos
| France Central | 40.79.137.0, 40.79.129.1 | 40.79.136.32/29, 40.79.144.32/29 |
| France South | 40.79.177.0 | 40.79.176.40/29, 40.79.177.32/29|
| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29|
-| India Central | 104.211.96.159 | 104.211.86.32/29, 20.192.96.32/29|
-| India South | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29|
+| India Central || 104.211.86.32/29, 20.192.96.32/29|
+| India South | | 40.78.192.32/29, 40.78.193.32/29|
| India West | 104.211.160.80 | 104.211.144.32/29, 104.211.145.32/29 |
| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
-| Japan West | 104.214.148.156, 40.74.96.6, 40.74.96.7 | 40.74.96.32/29 |
+| Japan West | | 40.74.96.32/29 |
| Korea Central | 52.231.17.13 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
| Korea South | 52.231.145.3 | |
| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.192/29|
The following table lists the gateway IP addresses of the Azure Database for Pos
| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29 |
| UAE North | 65.52.248.0 | 40.120.72.32/29, 65.52.248.32/29 |
| UK South | 51.140.184.11, 51.140.144.32, 51.105.64.0 |51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 |
-| UK West | 51.141.8.11 | 51.140.208.96/29, 51.140.209.32/29 |
-| West Central US | 13.78.145.25, 52.161.100.158 | 13.71.193.32/29 |
+| UK West | | 51.140.208.96/29, 51.140.209.32/29 |
+| West Central US | | 13.71.193.32/29 |
| West Europe |13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29|
| West US |13.86.216.212, 13.86.217.212 |13.86.217.224/29|
| West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29|
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
You now have a minishell session set up ready to enable your Azure Kubernetes Se
## Enable Azure Kubernetes Service on the Azure Stack Edge device
-Run the following commands at the PowerShell prompt, specifying the object ID you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
+Run the following commands at the PowerShell prompt, specifying the object ID you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
```powershell
Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}
The Azure Private 5G Core private mobile network requires a custom location and
    --cluster-extension-ids "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Kubernetes/connectedClusters/$RESOURCE_NAME/providers/Microsoft.KubernetesConfiguration/extensions/networkfunction-operator"
```
-You should see the new **Custom Location** visible as a resource in the Azure portal within the specified resource group. Using the `kubectl get pods -A` command (with access to your *kubeconfig* file) should also show new pods corresponding to the extensions that have been installed. There should be one pod in the *azurehybridnetwork* namespace, and one in the *packet-core-monitor* namespace.
+You should see the new **Custom location** visible as a resource in the Azure portal within the specified resource group. Using the `kubectl get pods -A` command (with access to your *kubeconfig* file) should also show new pods corresponding to the extensions that have been installed. There should be one pod in the *azurehybridnetwork* namespace, and one in the *packet-core-monitor* namespace.
## Rollback
Alternatively, you can perform a full reset using the **Device Reset** blade in
- **Azure Kubernetes Cluster** (if successfully created)
- **Custom location** (if successfully created)
+## Changing ASE configuration after deployment
+
+You may need to update the ASE configuration after deploying the packet core, for example to add or remove an attached data network or to change an IP address. To change the ASE configuration, destroy the **Custom location** and **Azure Kubernetes Service** resources, make your ASE configuration changes, and then recreate those resources. This allows you to temporarily disconnect the packet core instead of destroying and recreating it, minimizing the reconfiguration needed. You may also need to make equivalent changes to the packet core configuration.
+
+> [!CAUTION]
+> Your packet core will be unavailable during this procedure. If you're making changes to a healthy packet core instance, we recommend running this procedure during a maintenance window to minimize the impact on your service.
+
+1. Navigate to the resource group overview in the Azure portal (for the resource group containing the packet core). Select the **Packet Core Control Plane** resource and select **Modify packet core**. Set **Azure Arc Custom Location** to **None** and select **Modify**.
+1. Navigate to the resource group containing the **Custom location** resource. Select the tick box for the **Custom location** resource and select **Delete**. Confirm the deletion.
+1. Navigate to the **Azure Stack Edge** resource and remove all configuration for the **Azure Kubernetes Service**.
+1. Access the ASE local UI and update the configuration as needed.
+1. Recreate the Kubernetes cluster. See [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc).
+1. Recreate the custom location resource. Select the **Packet Core Control Plane** resource and select **Configure a custom location**.
+
+Your packet core should now be in service with the updated ASE configuration. To update the packet core configuration, see [Modify a packet core instance](modify-packet-core.md).
+
## Next steps

Your Azure Stack Edge device is now ready for Azure Private 5G Core. The next step is to collect the information you'll need to deploy your private network.
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
Title: Modify a packet core instance description: In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal. --++ Previously updated : 09/29/2022 Last updated : 03/31/2023
-# Modify the packet core instance in a site
+# Modify a packet core instance
Each Azure Private 5G Core site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal; this includes modifying the packet core's custom location, connected Azure Stack Edge (ASE) device, and access network configuration. You'll also learn how to add and modify the data networks attached to the packet core instance.
To modify the packet core and/or access network configuration:
## Attach a data network
+> [!IMPORTANT]
+> You must configure the ASE device with interfaces corresponding to the data networks before you can attach them to the packet core. See [Changing ASE configuration after deployment](commission-cluster.md#changing-ase-configuration-after-deployment).
+
To configure a new or existing data network and attach it to your packet core instance:

1. If you haven't already, [select the packet core instance to modify](#select-the-packet-core-instance-to-modify).
quotas Classic Deployment Model Quota Increase Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/classic-deployment-model-quota-increase-requests.md
- Title: Increase a VM-family vCPU quota for the Classic deployment model
-description: The Classic deployment model, now superseded by the Resource Manager model, enforces a global vCPU quota limit for VMs and virtual machine scale sets.
Previously updated : 12/02/2021---
-# Increase a VM-family vCPU quota for the Classic deployment model
-
-The Classic deployment model is the older generation Azure deployment model. It enforces a global vCPU quota limit for virtual machines and virtual machine scale sets. The Classic deployment model is no longer recommended, and is now superseded by the Resource Manager model.
-
-To learn more about these two deployment models and the advantages of using Resource Manager, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
-
-When a new subscription is created, a default quota of vCPUs is assigned to it. Any time a new virtual machine is deployed using the Classic deployment model, the sum of new and existing vCPU usage across all regions must not exceed the vCPU quota approved for the Classic deployment model.
-
-You can request vCPU quota increases for the Classic deployment model in the Azure portal by using **Help + support** or **Usage + quotas**.
-
-## Request quota increase for the Classic deployment model using Help + support
-
-Follow the instructions below to create a vCPU quota increase request for the Classic deployment model by using **Help + support** in the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and [open a new support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-1. For **Issue type**, choose **Service and subscription limits (quotas)**.
-
-1. Select the subscription that needs an increased quota.
-
-1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**. Then select **Next**.
-
- :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
-
-1. In the **Problem details** section, select **Enter details**. For deployment model, select **Classic**, then select a location.
-
-1. For **SKU family**, select one or more SKU families to increase.
-
-1. Enter the new limits you would like on the subscription. When you're finished, select **Save and continue** to continue creating your support request.
-
-1. Complete the rest of the **Additional information** screen, and then select **Next**.
-
-1. On the **Review + create** screen, review the details that you'll send to support, and then select **Create**.
-
-## Request quota increase for the Classic deployment model from Usage + quotas
-
-Follow the instructions below to create a vCPU quota increase request for the Classic deployment model from **Usage + quotas** in the Azure portal.
-
-1. From https://portal.azure.com, search for and select **Subscriptions**.
-
-1. Select the subscription that needs an increased quota.
-
-1. Select **Usage + quotas**.
-
-1. In the upper right corner, select **Request increase**.
-
-1. Follow the steps above (starting at step 4) to complete your request.
-
-## Next steps
-- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-- Learn about the advantages of using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Azure Container A
||--||
| Azure Monitor| The Azure Monitor integration is not supported in Azure China |
+## Azure in China account sign-in
+
+The table below lists ways to connect to your Azure account in Azure Global vs. Azure in China.
+
+| Sign in description | Azure Global | Azure in China |
+|--|--||
+| Sign in to Azure with an authenticated account for use with Azure Resource Manager| Connect-AzAccount | Connect-AzAccount -Environment AzureChinaCloud|
+| Sign in to Azure Active Directory with Microsoft Graph PowerShell | Connect-MgGraph | Connect-MgGraph -AzureEnvironment China|
+| Sign in to your Azure classic portal account | Add-AzureAccount | Add-AzureAccount -Environment AzureChinaCloud |
## Azure in China REST endpoints
-The table below lists API endpoints in Azure vs. Azure in China for accessing and managing some of the more common services.
+The table below lists API endpoints in Azure Global vs. Azure in China for accessing and managing some of the more common services.
For IP ranges for Azure in China, download [Azure Datacenter IP Ranges in China](https://www.microsoft.com/download/confirmation.aspx?id=57062).
For IP ranges for Azure in China, download [Azure Datacenter IP Ranges in China
| Azure Cognitive Services | `https://api.projectoxford.ai/face/v1.0` | `https://api.cognitive.azure.cn/face/v1.0` |
| Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> |
| Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn |
-| Sign in with PowerShell: <br>- Azure classic portal <br>- Azure Resource Manager <br>- Azure AD| - Add-AzureAccount<br>- Connect-AzureRmAccount <br> - Connect-msolservice |  - Add-AzureAccount -Environment AzureChinaCloud <br> - Connect-AzureRmAccount -Environment AzureChinaCloud <br>- Connect-msolservice -AzureEnvironment AzureChinaCloud |
| Azure Container Apps Default Domain | \*.azurecontainerapps.io | No default domain is provided for external environment. The [custom domain](/azure/container-apps/custom-domains-certificates) is required. |
| Azure Container Apps Event Stream Endpoint | \<region\>.azurecontainerapps.dev | \<region\>.chinanorth3.azurecontainerapps-dev.cn |
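As an illustration of how these alternate endpoints surface in application code, here's a minimal C# sketch, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages and a hypothetical vault and secret name; it points the client at the `.cn` vault host and the Azure China authority:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class ChinaCloudKeyVaultSample
{
    static void Main()
    {
        // Hypothetical vault name; note the *.vault.azure.cn suffix from the endpoint table above.
        var vaultUri = new Uri("https://contoso-vault.vault.azure.cn/");

        // Authenticate against the Azure China authority instead of the global cloud.
        var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzureChina
        });

        var client = new SecretClient(vaultUri, credential);
        KeyVaultSecret secret = client.GetSecret("example-secret"); // hypothetical secret name
        Console.WriteLine(secret.Value);
    }
}
```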
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
-> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account for Azure Files | Optional |
+> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account used for sapmnt | Optional |
### Azure NetApp Files Support
The table below contains the Terraform parameters, these parameters need to be
The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags.
-High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device)
+High availability configurations use Pacemaker with Azure fencing agents.
+
+> [!NOTE]
+> The highly available Central Services deployment requires using a shared file system for sap_mnt. You can use either Azure Files or Azure NetApp Files, selected with the NFS_provider attribute. The default is Azure Files. To use Azure NetApp Files, set the NFS_provider attribute to ANF.
+
+
+### Fencing agent configuration
+
+SDAF supports using either managed identities or service principals for fencing agents. The following sections describe how to configure each option.
+
+By setting the variable 'use_msi_for_clusters' to true, the fencing agent uses managed identities. This is the recommended option.
+
+If you want to use a service principal for the fencing agent, set that variable to false.
+
+The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device)
```azurecli-interactive
az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"
sap Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-webapp.md
# Configure the Control Plane Web Application
-As a part of the SAP automation framework control plane, you can optionally create an interactive web application that will assist you in creating the required configuration files and deploying SAP workload zones and systems using Azure Pipelines.
+As a part of the SAP automation framework control plane, you can optionally create an interactive web application that assists you in creating the required configuration files and deploying SAP workload zones and systems using Azure Pipelines.
:::image type="content" source="./media/deployment-framework/webapp-front-page.png" alt-text="Web app front page":::
For full instructions on setting up the web app using the Azure CLI, see [Deploy
5. Configure the application settings.
6. (Optionally) add an additional access policy to the app service.
+## Accessing the web app
+
+By default, there's no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, navigate to the Azure portal. In the deployer resource group, find the web app. Then, under **Settings**, select **Networking**, and then select **Access restriction**. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](../../app-service/app-service-ip-restrictions.md).
+
+You'll also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left-hand side, select **Identity**. In the **System assigned** tab, select **Azure role assignments** > **Add role assignment**. Select **Subscription** as the scope and **Reader** as the role, and then select **Save**. Without this step, the web app dropdown functionality won't work.
+
+You can sign in and visit the web app by following the URL from earlier or selecting browse inside the app service resource. With the web app, you are able to configure SAP workload zones and system infrastructure. Select download to obtain a parameter file of the workload zone or system you specified, for use in the later deployment steps.
+ ## Using the web app
If deploying using the Azure CLI, you can download the parameter file for any la
4. Next to the file you would like to convert to a workload zone or system object, click "Convert".
5. The workload zone or system object will appear in its respective tab.
-### Deploying a workload zone or system object (Azure DevOps Pipelines deployment)
+### Deploying a workload zone or system object (Azure Pipelines deployment)
1. Navigate to the Workload zones or Systems tab.
2. Next to the workload zone or system you would like to deploy, click "Deploy".
   * If you would like to deploy a file, first convert it to a workload zone or system object.
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
az role assignment create --assignee <appId> --role "User Access Administrator"
## Prepare the webapp
-This step is optional. If you would like a browser-based UX to assist in the configuration of SAP workload zones and systems, run the following commands before deploying the control plane.
+This step is optional. If you would like a browser-based UX to help with the configuration of SAP workload zones and systems, run the following commands before deploying the control plane.
# [Linux](#tab/linux)
$region_code="WEEU"
$env:TF_VAR_app_registration_app_id = (az ad app create `
    --display-name $region_code-webapp-registration `
- --enable-id-token-issuance true `
- --sign-in-audience AzureADMyOrg `
    --required-resource-accesses ./manifest.json `
    --query "appId").Replace('"',"")
del manifest.json
# [Azure DevOps](#tab/devops)
-It is currently not possible to perform this action from Azure DevOps.
+It's currently not possible to perform this action from Azure DevOps.
## Deploy the control plane
-The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
+The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
-The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
+The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
-Running the command below will create the Deployer, the SAP Library and add the Service Principal details to the deployment key vault. If you followed the web app setup in the step above, this command will also create the infrastructure to host the application.
+Running the following command creates the Deployer and the SAP Library, and adds the Service Principal details to the deployment key vault. If you followed the web app setup in the step above, this command also creates the infrastructure to host the application.
# [Linux](#tab/linux)

You can copy the sample configuration files to start testing the deployment automation framework.
-```bash
-cd ~/Azure_SAP_Automated_Deployment
-
-cp -Rp sap-automation/samples/WORKSPACES WORKSPACES
-
-```
Run the following command to deploy the control plane:

```bash
az logout
az login
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+cd ~/Azure_SAP_Automated_Deployment/samples/WORKSPACES
export subscriptionId="<subscriptionId>"
export spn_id="<appId>"
export spn_secret="<password>"
export tenant_id="<tenantId>"
export env_code="MGMT"
- export region_code="<region_code>"
+ export region_code="WEEU"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="${subscriptionId}"
+ export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES"
+ export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
- ${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \
- --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
- --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \
- --subscription "${subscriptionId}" \
- --spn_id "${spn_id}" \
- --spn_secret "${spn_secret}" \
- --tenant_id "${tenant_id}" \
+
+ ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
+ --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
+ --library_parameter_file "LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars" \
+ --subscription "${subscriptionId}" \
+ --spn_id "${spn_id}" \
+ --spn_secret "${spn_secret}" \
+ --tenant_id "${tenant_id}" \
    --auto-approve
```

# [Windows](#tab/windows)
-You can copy the sample configuration files to start testing the deployment automation framework.
-
-```powershell
-
-cd C:\Azure_SAP_Automated_Deployment
-
-xcopy /E sap-automation\samples\WORKSPACES WORKSPACES
-
-```
--
-```powershell
--
-$subscription="<subscriptionID>"
-$appId="<appID>"
-$spn_secret="<password>"
-$tenant_id="<tenant>"
-
-cd C:\Azure_SAP_Automated_Deployment\WORKSPACES
-
-New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP00-INFRASTRUCTURE\MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars -LibraryParameterfile .\LIBRARY\MGMT-WEEU-SAP_LIBRARY\MGMT-WEEU-SAP_LIBRARY.tfvars -Subscription $subscription -SPN_id $appId -SPN_password $spn_secret -Tenant_id $tenant_id
-```
--
-> [!NOTE]
-> Be sure to replace the sample value `<subscriptionID>` with your subscription ID.
-> Replace the `<appID>`, `<password>`, `<tenant>` values with the output values of the SPN creation
-
+You can't perform this action from Windows.
# [Azure DevOps](#tab/devops)

Open (https://dev.azure.com) and go to your Azure DevOps project.
Open (https://dev.azure.com) and go to your Azure DevOps project.
> [!NOTE]
> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files. For this example, you can use 'samples/WORKSPACES'.
-The deployment will use the configuration defined in the Terraform variable files located in the 'samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders.
+The deployment uses the configuration defined in the Terraform variable files located in the 'WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders.
Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter the configuration names for the deployer and the SAP library. Use 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
Connect to the deployer by following these steps:
Run the following script to configure the deployer.

```bash
-mkdir -p ~/Azure_SAP_Automated_Deployment
-cd ~/Azure_SAP_Automated_Deployment
+mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+
+git clone https://github.com/Azure/sap-automation-bootstrap.git config
+
+git clone https://github.com/Azure/sap-automation.git sap-automation
-git clone https://github.com/Azure/sap-automation.git
+git clone https://github.com/Azure/sap-automation-samples.git samples
cd sap-automation/deploy/scripts
./configure_deployer.sh
```
-The script will install Terraform and Ansible and configure the deployer.
+The script installs Terraform and Ansible and configures the deployer.
### Manually configure the deployer
Connect to the deployer by following these steps:
1. Save the file. If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`.
-1. Connect to the deployer VM through any SSH client such as VSCode. Use the private IP address of the deployer, and the SSH key you downloaded. For instructions on how to connect to the Deployer using VSCode see [Connecting to Deployer using VSCode](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first using PuTTYGen.
+1. Connect to the deployer VM through any SSH client such as Visual Studio Code. Use the private IP address of the deployer, and the SSH key you downloaded. For instructions on how to connect to the Deployer using Visual Studio Code see [Connecting to Deployer using Visual Studio Code](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first using PuTTYGen.
> [!NOTE]
> The default username is *azureadm*
Configure the deployer using the following script:
```bash
-mkdir -p ~/Azure_SAP_Automated_Deployment
+mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
-cd ~/Azure_SAP_Automated_Deployment
+git clone https://github.com/Azure/sap-automation-bootstrap.git config
-git clone https://github.com/Azure/sap-automation.git
+git clone https://github.com/Azure/sap-automation.git sap-automation
+
+git clone https://github.com/Azure/sap-automation-samples.git samples
cd sap-automation/deploy/scripts
./configure_deployer.sh
```
-The script will install Terraform and Ansible and configure the deployer.
--
-## Deploy the Control Plane Web Application
-
-> [!IMPORTANT]
-> Control Plane Web Application is currently in PREVIEW and not yet available in the main branch.
-
-If you would like to use the web app, follow the steps below. If not, ignore this section.
-
-The web app resource can be found in the deployer resource group. In the Azure portal, select resource groups in your subscription. The deployer resource group will be named something like MGMT-[region]-DEP00-INFRASTRUCTURE. Inside the deployer resource group, locate the app service, named something like mgmt-[region]-dep00-sapdeployment123. Open the app service and copy the URL listed. It should be in the format of https://mgmt-[region]-dep00-sapdeployment123.azurewebsites.net. This will be the value for webapp_url below.
-
-The following commands will configure the application urls, generate a zip file of the web app code, deploy the software to the app service, and configure the application settings.
-
-# [Linux](#tab/linux)
-
-```bash
-
-webapp_url=<webapp_url>
-az ad app update \
- --id $TF_VAR_app_registration_app_id \
- --web-home-page-url ${webapp_url} \
- --web-redirect-uris ${webapp_url}/ ${webapp_url}/.auth/login/aad/callback
-
-```
-# [Windows](#tab/windows)
-
-```powershell
-
-$webapp_url="<webapp_url>"
-az ad app update `
- --id $TF_VAR_app_registration_app_id `
- --web-home-page-url $webapp_url `
- --web-redirect-uris $webapp_url/ $webapp_url/.auth/login/aad/callback
-
-```
-# [Azure DevOps](#tab/devops)
-
-It is currently not possible to perform this action from Azure DevOps.
---
-> [!TIP]
-> Perform the following task from the deployer.
-```bash
-
-cd ~/Azure_SAP_Automated_Deployment/sap-automation/Webapp/AutomationForm
-
-dotnet build
-dotnet publish --configuration Release
-
-cd bin/Release/netcoreapp3.1/publish/
-
-sudo apt install zip
-zip -r deploymentfile.zip .
-
-az webapp deploy --resource-group <group-name> --name <app-name> --src-path deploymentfile.zip
-
-```
-```bash
-
-az webapp config appsettings set -g <group-name> -n <app-name> --settings \
-IS_PIPELINE_DEPLOYMENT=false
-
-```
--
-## Accessing the web app
-
-By default there will be no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, navigate to the Azure portal. In the deployer resource group, find the web app. Then under settings on the left hand side, click on networking. From here, click Access restriction. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](../../app-service/app-service-ip-restrictions.md).
-
-You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality will not work.
+The script installs Terraform and Ansible and configures the deployer.
-You can log in and visit the web app by following the URL from earlier or clicking browse inside the app service resource. With the web app, you are able to configure SAP workload zones and system infrastructure. Click download to obtain a parameter file of the workload zone or system you specified, for use in the later deployment steps.
## Next step
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Clone the repository and prepare the execution environment by using the followin
- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment. ```bash
-mkdir ~/Azure_SAP_Automated_Deployment/config; cd $_
-git clone https://github.com/Azure/sap-automation-bootstrap.git
+mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+git clone https://github.com/Azure/sap-automation-bootstrap.git config
-mkdir ~/Azure_SAP_Automated_Deployment/sap-automation; cd $_
-git clone https://github.com/Azure/sap-automation.git
+git clone https://github.com/Azure/sap-automation.git sap-automation
-mkdir ~/Azure_SAP_Automated_Deployment/samples; cd $_
-git clone https://github.com/Azure/sap-automation-samples.git
+git clone https://github.com/Azure/sap-automation-samples.git samples
```
sap Hana Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-get-started.md
# Installation of SAP HANA on Azure virtual machines

## Introduction
-This guide helps you to point to the right resources to deploy HANA in Azure virtual machines successfully. This guide is going to point you to documentation resources that you need to check before installing SAP HANA in an Azure VM. So, that you are able to perform the right steps to end with a supported configuration of SAP HANA in Azure VMs.
+This document points you to the right resources for deploying HANA on Azure virtual machines, including documents that you need to check before installing SAP HANA on Azure VMs. The aim is to ensure you're able to perform the right steps to achieve a supported configuration of SAP HANA on Azure.
> [!NOTE]
-> This guide describes deployments of SAP HANA into Azure VMs. For information on how to deploy SAP HANA into HANA large instances, see [How to install and configure SAP HANA (Large Instances) on Azure](../../virtual-machines/workloads/sap/hana-installation.md).
+> This guide describes deployments of SAP HANA into Azure VMs. For information on how to deploy SAP HANA on HANA large instances, see [How to install and configure SAP HANA (Large Instances) on Azure](../../virtual-machines/workloads/sap/hana-installation.md).
## Prerequisites

This guide also assumes that you're familiar with:
This guide also assumes that you're familiar with:
* High availability concepts for SAP HANA as documented in [SAP HANA high availability for Azure virtual machines](./sap-hana-availability-overview.md)

## Step-by-step before deploying
-In this section, the different steps are listed that you need to perform before starting with the installation of SAP HANA in an Azure virtual machine. The order is enumerated and as such should be followed through as enumerated:
+This section lists the steps that you need to perform before starting the installation of SAP HANA in an Azure virtual machine. The steps are enumerated and should be followed in the order listed:
-1. Not all possible deployment scenarios are supported on Azure. Therefore, you should check the document [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md) for the scenario you have in mind with your SAP HANA deployment. If the scenario is not listed, you need to assume that it has not been tested and, as a result, is not supported
-2. Assuming that you have a rough idea on your memory requirement for your SAP HANA deployment, you need to find a fitting Azure VM. Not all the VMs that are certified for SAP NetWeaver, as documented in [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533), are SAP HANA certified. The source of truth for SAP HANA certified Azure VMs is the website [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). The units starting with **S** are [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) units and not Azure VMs.
-3. Different Azure VM types have different minimum operating system releases for SUSE Linux or Red Hat Linux. On the website [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120), you need to click on an entry in the list of SAP HANA certified units to get detailed data of this unit. Besides the supported HANA workload, the OS releases that are supported with those units for SAP HANA are listed
+1. Although technically possible, some deployment scenarios aren't supported on Azure. Therefore, you should check the document [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md) for the scenario you have in mind with your SAP HANA deployment. If the scenario isn't listed, you need to assume that it hasn't been tested and, as a result, isn't supported.
+2. Assuming that you have a rough idea of the memory requirement for your SAP HANA deployment, you need to find a suitable Azure VM. Not all the VMs that are certified for SAP NetWeaver, as documented in [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533), are SAP HANA certified. The source of truth for SAP HANA certified Azure VMs is the website [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). The units starting with **S** are [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) units and not Azure VMs.
+3. Different Azure VM types have different minimum operating system releases for SUSE Linux or Red Hat Linux. On the website [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120), you need to click on an entry in the list of SAP HANA certified units to get detailed data of this unit. Besides the supported HANA workload, the OS releases that are supported with those units for SAP HANA are listed.
4. Regarding operating system releases, you need to consider certain minimum kernel releases. These minimum releases are documented in these SAP support notes:
   - [SAP support note #2814271 SAP HANA Backup fails on Azure with Checksum Error](https://launchpad.support.sap.com/#/notes/2814271)
   - [SAP support note #2753418 Potential Performance Degradation Due to Timer Fallback](https://launchpad.support.sap.com/#/notes/2753418)
   - [SAP support note #2791572 Performance Degradation Because of Missing VDSO Support For Hyper-V in Azure](https://launchpad.support.sap.com/#/notes/2791572)
4. Based on the OS release that is supported for the virtual machine type of choice, you need to check whether your desired SAP HANA release is supported with that operating system release. Read [SAP support note #2235581](https://launchpad.support.sap.com/#/notes/2235581) for a support matrix of SAP HANA releases with the different operating system releases.
-5. As you might have found a valid combination of Azure VM type, operating system release and SAP HANA release, you need to check in the SAP Product Availability Matrix. In the SAP Availability Matrix, you can find out whether the SAP product you want to run against your SAP HANA database is supported.
+5. When you have found a valid combination of Azure VM type, operating system release and SAP HANA release, you will need to check the SAP Product Availability Matrix. In the SAP Availability Matrix, you can verify whether the SAP product you want to run against your SAP HANA database is supported.
## Step-by-step VM deployment and guest OS considerations

In this phase, you need to go through the steps of deploying the VM(s) to install HANA and then optimizing the chosen operating system after the installation.
-1. Chose the base image out of the Azure gallery. If you want to build your own operating system image for SAP HANA, you need to know all the different packages that are necessary for a successful SAP HANA installation. Otherwise it is recommended using the SUSE and Red Hat images for SAP or SAP HANA out of the gallery. These images include the packages necessary for a successful HANA installation. Based on your support contract with the operating system provider, you need to choose an image where you bring your own license. Or you choose an OS image that includes support
-2. If you chose a guest OS image that requires you bringing your own license, you need to register the OS image with your subscription, so, that you can download and apply the latest patches. This step is going to require public internet access. Unless you set up your private instance of, for example, an SMT server in Azure.
-3. Decide the network configuration of the VM. You can read more information in the document [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md). Keep in mind that there are no network throughput quotas you can assign to virtual network cards in Azure. As a result, the only purpose of directing traffic through different vNICs is based on security considerations. We trust you to find a supportable compromise between complexity of traffic routing through multiple vNICs and the requirements enforced by security aspects.
+1. Choose the base image from the Azure gallery. If you want to build your own operating system image for SAP HANA, you need to know all the different packages that are necessary for a successful SAP HANA installation. Otherwise it is recommended using the SUSE and Red Hat images for SAP or SAP HANA out of the gallery. These images include the packages necessary for a successful HANA installation. Based on your support contract with the operating system provider, you need to choose an image where you bring your own license, or choose an OS image that includes support.
+2. If you choose a guest OS image that requires you to bring your own license, you will need to register this OS image with your subscription to enable you to download and apply the latest patches. This step is going to require public internet access, unless you set up your private instance of, for example, an SMT server in Azure.
+3. Decide the network configuration of the VM. You can get more information in the document [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md). Keep in mind that there are no network throughput quotas you can assign to virtual network cards in Azure. As a result, the only reason to direct traffic through different vNICs is security considerations. Find a supportable compromise between the complexity of traffic routing through multiple vNICs and your security requirements.
3. Once the VM is deployed and registered, apply the latest patches to the operating system. If you registered with your own subscription, download the patches through it. If you chose an image that includes operating system support, the VM should already have access to the patches.
-4. Apply the tunes necessary for SAP HANA. These tunes are listed in these SAP support notes:
+4. Apply the tunings necessary for SAP HANA. These tunings are listed in the following SAP support notes:
- [SAP support note #2694118 - Red Hat Enterprise Linux HA Add-On on Azure](https://launchpad.support.sap.com/#/notes/2694118) - [SAP support note #1984787 - SUSE LINUX Enterprise Server 12: Installation notes](https://launchpad.support.sap.com/#/notes/1984787)
In this phase, you need to go through the steps deploying the VM(s) to install H
- [SAP support note #2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/0002455582) - [SAP support note #2382421 - Optimizing the Network Configuration on HANA- and OS-Level](https://launchpad.support.sap.com/#/notes/2382421)
-1. Select the Azure storage type for SAP HANA. In this step, you need to decide on storage layout for SAP HANA installation. You are going to use either attached Azure disks or native Azure NFS shares. The Azure storage types that or supported and combinations of different Azure storage types that can be used, are documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md). Take the configurations documented as starting point. For non-production systems, you might be able to configure lower throughput or IOPS. For production purposes, you might need to configure a bit more throughput and IOPS.
-2. Make sure that you configured [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for your volumes that contain the DBMS transaction logs or redo logs when you are using M-Series or Mv2-Series VMs. Be aware of the limitations for Write Accelerator as documented.
-2. Check whether [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled on the VM(s) deployed.
+1. Select the Azure storage type and storage layout for the SAP HANA installation. You are going to use either attached Azure disks or native Azure NFS shares. The Azure storage types that are supported and the combinations of different Azure storage types that can be used are documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md). Take the configurations documented as a starting point. For non-production systems, you might be able to configure lower throughput or IOPS. For production systems, you might need to increase the throughput and IOPS.
+2. Make sure you have configured [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for your volumes that contain the DBMS transaction logs or redo logs when using M-Series or Mv2-Series VMs. Be aware of the limitations for Write Accelerator as documented. A PowerShell sketch for verifying this setting, together with the accelerated networking check in the next step, follows this list.
+2. Check whether [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled on the VMs deployed.
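As an illustration, here's a minimal PowerShell sketch for verifying both settings on a deployed VM. The resource group and VM names are placeholders, not values from this guide.

```azurepowershell
# Sketch: verify Write Accelerator and Accelerated Networking on a deployed VM.
# All names below are placeholders.
$rgName = "<resource-group>"
$vmName = "<hana-vm>"

# Check whether Write Accelerator is enabled on the data disks (M-series/Mv2-series only).
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
$vm.StorageProfile.DataDisks | Select-Object Name, WriteAcceleratorEnabled

# Check whether Accelerated Networking is enabled on each of the VM's network interfaces.
foreach ($nicRef in $vm.NetworkProfile.NetworkInterfaces) {
    $nic = Get-AzNetworkInterface -ResourceId $nicRef.Id
    "{0}: AcceleratedNetworking={1}" -f $nic.Name, $nic.EnableAcceleratedNetworking
}
```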
> [!NOTE] > Not all commands in the different saptune profiles, or those described in the notes, might run successfully on Azure. Commands that would manipulate the power mode of VMs usually return an error, since the power mode of the underlying Azure host hardware cannot be manipulated. ## Step-by-step preparations specific to Azure virtual machines
-One of the Azure specifics is the installation of an Azure VM extension that delivers monitoring data for the SAP Host Agent. The details about the installation of this monitoring extension are documented in:
+One of the Azure-specific preparations is the installation of an Azure VM extension that delivers monitoring data for the SAP Host Agent. The details about the installation of this monitoring extension are documented in:
- [SAP Note 2191498](https://launchpad.support.sap.com/#/notes/2191498/E) discusses SAP enhanced monitoring with Linux VMs on Azure - [SAP Note 1102124](https://launchpad.support.sap.com/#/notes/1102124/E) discusses information about SAPOSCOL on Linux
One of the Azure specifics is the installation of an Azure VM extension that del
- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md#d98edcd3-f2a1-49f7-b26a-07448ceb60ca) ## SAP HANA installation
-With the Azure virtual machines deployed and the operating systems registered and configured, you can install SAP HANA according to the SAP install. As a good start to get to this documentation, start with this SAP website [HANA resources](https://www.sap.com/products/s4hana-erp.html?btp=9d3e6f82-d8ab-4122-8d2d-bf4971217afd)
+With the Azure virtual machines deployed and the operating systems registered and configured, you can install SAP HANA according to the SAP installation documentation. A good starting point is the SAP website [HANA resources](https://www.sap.com/products/s4hana-erp.html?btp=9d3e6f82-d8ab-4122-8d2d-bf4971217afd).
For SAP HANA scale-out configurations using direct attached disks of Azure Premium Storage or Ultra disk, read the specifics in the document [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md#configuring-azure-infrastructure-for-sap-hana-scale-out)
For information on how to back up SAP HANA databases on Azure VMs, see:
Read the documentation: - [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)-- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
Follow the below steps to assign the search service or user-assigned managed ide
Include the brackets around your search service name or user-assigned managed identity name.
- ```tsql
+ ```sql
CREATE USER [insert your search service name here or user-assigned managed identity name] FROM EXTERNAL PROVIDER;
EXEC sp_addrolemember 'db_datareader', [insert your search service name here or user-assigned managed identity name];
```
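If you prefer to script this step instead of running it in SSMS, the following sketch issues the same kind of check from PowerShell. It assumes a recent SqlServer module (for the `-AccessToken` parameter) and the Az module, and that your signed-in identity can query the database; the server, database, and identity names are placeholders.

```powershell
# Sketch: verify that the external-provider user was created (placeholders throughout).
Import-Module SqlServer
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
Invoke-Sqlcmd -ServerInstance "<your-server>.database.windows.net" `
    -Database "<your-database>" `
    -AccessToken $token `
    -Query "SELECT name, type_desc FROM sys.database_principals WHERE name = '<search-service-or-identity-name>';"
```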
Follow the below steps to assign the search service or user-assigned managed ide
If you later change the search service identity or user-assigned identity after assigning permissions, you must remove the role membership and remove the user in the SQL database, then repeat the permission assignment. Removing the role membership and user can be accomplished by running the following commands:
- ```tsql
+ ```sql
sp_droprolemember 'db_datareader', [insert your search service name or user-assigned managed identity name];
DROP USER IF EXISTS [insert your search service name or user-assigned managed identity name];
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
Follow these steps to assign the search service system managed identity permissi
4. In the T-SQL window, copy the following commands and include the brackets around your search service name. Select **Execute**.
- ```tsql
+ ```sql
CREATE USER [insert your search service name here or user-assigned managed identity name] FROM EXTERNAL PROVIDER;
EXEC sp_addrolemember 'db_datareader', [insert your search service name here or user-assigned managed identity name];
```
Follow these steps to assign the search service system managed identity permissi
If you later change the search service system identity after assigning permissions, you must remove the role membership and remove the user in the SQL database, then repeat the permission assignment. Removing the role membership and user can be accomplished by running the following commands:
- ```tsql
+ ```sql
sp_droprolemember 'db_datareader', [insert your search service name or user-assigned managed identity name];
DROP USER IF EXISTS [insert your search service name or user-assigned managed identity name];
sentinel Indicators Bulk File Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md
Here's an example ipv4-addr indicator using the JSON template.
"lang": "", "external_references": [], "object_marking_refs": [],
- "granular_markings": [],
+ "granular_markings": []
} ] ```
sentinel Sap Audit Log Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-log-workbook.md
Title: Microsoft Sentinel solution for SAP® applications - SAP audit log workbook overview
-description: Learn about the SAP audit log workbook, used to monitor and track data across your SAP systems.
+ Title: Microsoft Sentinel solution for SAP® applications - SAP -Security Audit log and Initial Access workbook overview
+description: Learn about the SAP -Security Audit log and Initial Access workbook, used to monitor and track data across your SAP systems.
Last updated 01/23/2023
-# Microsoft Sentinel solution for SAP® applications - SAP audit log workbook
+# Microsoft Sentinel solution for SAP® applications - SAP -Security Audit log and Initial Access workbook
-This article describes the SAP Audit workbook, used for monitoring and tracking user audit activity across your SAP systems. You can use the workbook to get a bird's eye view of user audit activity, to better secure your SAP systems and gain quick visibility into suspicious actions. You can drill down into suspicious events as needed.
+This article describes the SAP -Security Audit log and Initial Access workbook, used for monitoring and tracking user audit activity across your SAP systems. You can use the workbook to get a bird's eye view of user audit activity, to better secure your SAP systems and gain quick visibility into suspicious actions. You can drill down into suspicious events as needed.
You can use the workbook either for ongoing monitoring of your SAP systems, or to review the systems following a security incident or other suspicious activity.
You can use the workbook either for ongoing monitoring of your SAP systems, or t
1. From the Microsoft Sentinel portal, select **Workbooks** from the **Threat management** menu.
-1. In the **Workbooks** gallery, enter *SAP audit* in the search bar, and select **SAP Audit** from among the results.
+1. In the **Workbooks** gallery, go to **Templates**, enter *SAP* in the search bar, and select **SAP -Security Audit log and Initial Access** from the results.
1. Select **View template** to use the workbook as is, or select **Save** to create an editable copy of the workbook. When the copy is created, select **View saved workbook**.
- :::image type="content" source="media/sap-audit-log-workbook/workbook-overview.png" alt-text="Screenshot of the top of the SAP Audit workbook." lightbox="media/sap-audit-log-workbook/workbook-overview.png":::
+ :::image type="content" source="media/sap-audit-log-workbook/workbook-overview.png" alt-text="Screenshot of the top of the SAP -Security Audit log and Initial Access workbook." lightbox="media/sap-audit-log-workbook/workbook-overview.png":::
> [!IMPORTANT] >
- > The SAP Audit workbook is hosted by the workspace where the Microsoft Sentinel solution for SAP® applications were installed. By default, both the SAP and the SOC data is assumed to be on the workspace that hosts the workbook.
+ > The SAP -Security Audit log and Initial Access workbook is hosted by the workspace where the Microsoft Sentinel solution for SAP® applications was installed. By default, both the SAP data and the SOC data are assumed to be on the workspace that hosts the workbook.
> > If the SOC data is on a different workspace than the workspace hosting the workbook, make sure to include the subscription for that workspace, and select the SOC workspace from **Azure audit and activity workspace**.
For more information, see:
- [Deploy the Microsoft Sentinel solution for SAP® applications data connector with SNC](configure-snc.md) - [Configuration file reference](configuration-file-reference.md) - [Prerequisites for deploying the Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)-- [Troubleshooting your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
+- [Troubleshooting your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
service-bus-messaging Jms Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/jms-developer-guide.md
Title: Azure Service Bus JMS 2.0 developer guide description: How to use the Java Message Service (JMS) 2.0 API to communicate with Azure Service Bus Previously updated : 02/12/2022 Last updated : 05/02/2023 # Azure Service Bus JMS 2.0 developer guide
A session can be created from the connection object as shown below.
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE); ```
+> [!NOTE]
+> The JMS API doesn't support receiving messages from Service Bus queues or topics that have messaging sessions enabled.
+ #### Session modes A session can be created with any of the following modes.
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Title: Azure Service Bus premium and standard tiers
description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences. Previously updated : 10/12/2022 Last updated : 05/02/2023 # Service Bus Premium and Standard messaging tiers
Azure Service Bus premium tier namespaces support the ability to send large mess
Here are some considerations when sending large messages on Azure Service Bus - * Supported on Azure Service Bus premium tier namespaces only.
- * Supported only when using the AMQP protocol. Not supported when using the SBMP protocol.
+ * Supported only when using the AMQP protocol. Not supported when using SBMP or HTTP protocols.
* Supported when using [Java Message Service (JMS) 2.0 client SDK](how-to-use-java-message-service-20.md) and other language client SDKs. * Sending large messages will result in decreased throughput and increased latency. * While 100 MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
The following steps may help you with troubleshooting connectivity/certificate/t
</Detail> </Error> ```-- Run the following command to check if any port is blocked on the firewall. Ports used are 443 (HTTPS), 5671 and 5672 (AMQP) and 9354 (Net Messaging/SBMP). Depending on the library you use, other ports are also used. Here is the sample command that check whether the 5671 port is blocked. C
+- Run the following command to check if any port is blocked on the firewall. Ports used are 443 (HTTPS), 5671 and 5672 (AMQP) and 9354 (Net Messaging/SBMP). Depending on the library you use, other ports are also used. Here's a sample command that checks whether port 5671 is blocked. C
```powershell tnc <yournamespacename>.servicebus.windows.net -port 5671
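To check all of the commonly used ports in one pass, a small loop like the following sketch works as well (`tnc` is an alias for `Test-NetConnection`). Replace the namespace placeholder with your own value.

```powershell
# Sketch: test the common Service Bus ports in one pass.
443, 5671, 5672, 9354 | ForEach-Object {
    Test-NetConnection -ComputerName "<yournamespacename>.servicebus.windows.net" -Port $_ |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```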
The following steps may help you with troubleshooting connectivity/certificate/t
```shell telnet <yournamespacename>.servicebus.windows.net 5671 ```-- When there are intermittent connectivity issues, run the following command to check if there are any dropped packets. This command will try to establish 25 different TCP connections every 1 second with the service. Then, you can check how many of them succeeded/failed and also see TCP connection latency. You can download the `psping` tool from [here](/sysinternals/downloads/psping).
+- When there are intermittent connectivity issues, run the following command to check if there are any dropped packets. This command tries to establish 25 TCP connections with the service, one per second. Then, you can check how many of them succeeded/failed and also see TCP connection latency. You can download the `psping` tool from [here](/sysinternals/downloads/psping).
```shell .\psping.exe -n 25 -i 1 -q <yournamespace>.servicebus.windows.net:5671 -nobanner
The following steps may help you with troubleshooting connectivity/certificate/t
Backend service upgrades and restarts may cause these issues in your applications. ### Resolution
-If the application code uses SDK, the [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) is already built in and active. The application will reconnect without significant impact to the application/workflow.
+If the application code uses the SDK, the [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) is already built in and active. The application reconnects without significant impact to the application/workflow.
## Unauthorized access: Send claims are required
To learn how to assign permissions to roles, see [Authenticate a managed identit
## Service Bus Exception: Put token failed ### Symptoms
-You'll receive the following error message:
+You receive the following error message:
`Microsoft.Azure.ServiceBus.ServiceBusException: Put token failed. status-code: 403, status-description: The maximum number of '1000' tokens per connection has been reached.`
Specify the full Azure Resource Manager ID of the subnet that includes the name
Remove-AzServiceBusVirtualNetworkRule -ResourceGroupName myRG -Namespace myNamespace -SubnetId "/subscriptions/SubscriptionId/resourcegroups/myOtherRG/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet" ```
+## Resource locks don't work when using the data plane SDK
+
+### Symptoms
+You have configured a delete lock on a Service Bus namespace, but you're able to delete resources in the namespace (queues, topics, etc.) by using the Service Bus Explorer.
+
+### Cause
+A resource lock is enforced in Azure Resource Manager (the control plane), so it doesn't prevent data plane SDK calls from deleting resources directly from the namespace. The standalone Service Bus Explorer uses the data plane SDK, so the deletion goes through.
+
+### Resolution
+We recommend that you use the Azure Resource Manager based APIs, via the Azure portal, PowerShell, the CLI, or a Resource Manager template, to delete entities, so that the resource lock can prevent the resources from being accidentally deleted. A sketch of creating such a lock follows.
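For illustration, here's a minimal sketch of placing a CanNotDelete lock on a namespace with PowerShell; the lock, resource group, and namespace names are placeholders.

```powershell
# Sketch: place a delete lock on a Service Bus namespace (placeholder names below).
New-AzResourceLock -LockName "myNamespaceLock" `
    -LockLevel CanNotDelete `
    -ResourceGroupName "myRG" `
    -ResourceName "myNamespace" `
    -ResourceType "Microsoft.ServiceBus/namespaces"
```

Keep in mind that, as described above, this lock protects control plane operations only; data plane deletes aren't blocked by it.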
+ ## Next steps See the following articles:
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
description: This article describes how to prepare machines to set up disaster r
Previously updated : 11/14/2019 Last updated : 05/02/2023
site-recovery Site Recovery Deployment Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md
Previously updated : 04/06/2022 Last updated : 05/02/2023
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Previously updated : 03/27/2023 Last updated : 05/02/2023
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Previously updated : 05/27/2021 Last updated : 05/02/2023
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Previously updated : 03/31/2023 Last updated : 05/02/2023 # Manage the Mobility agent
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Previously updated : 03/31/2023 Last updated : 05/02/2023
storage Access Tiers Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-best-practices.md
description: Learn about best practice guidelines that help you use access tiers
Previously updated : 01/20/2023 Last updated : 05/02/2023
# Best practices for using blob access tiers
-This article provides best practice guidelines that help you use access tiers to optimize performance and reduce costs. To learn more about access tiers, see [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md?tabs=azure-portal).
+This article provides best practice guidelines that help you use access tiers to optimize performance and reduce costs. To learn more about access tiers, see [Access tiers for blob data](access-tiers-overview.md?tabs=azure-portal).
## Choose the most cost-efficient access tiers
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
description: Learn how to specify a blob's access tier when you upload it, or ho
Previously updated : 08/18/2022 Last updated : 05/02/2023
You can set a blob's access tier in any of the following ways: -- By setting the default online access tier (Hot or Cool) for the storage account. Blobs in the account inherit this access tier unless you explicitly override the setting for an individual blob.-- By explicitly setting a blob's tier on upload. You can create a blob in the Hot, Cool, or Archive tier.-- By changing an existing blob's tier with a Set Blob Tier operation, typically to move from a hotter tier to a cooler one.-- By copying a blob with a Copy Blob operation, typically to move from a cooler tier to a hotter one.
+- By setting the default online access tier (hot or cool) for the storage account. Blobs in the account inherit this access tier unless you explicitly override the setting for an individual blob.
+- By explicitly setting a blob's tier on upload. You can create a blob in the hot, cool, cold, or archive tier.
+- By changing an existing blob's tier with a Set Blob Tier operation. Typically, you would use this operation to move from a hotter tier to a cooler one.
+- By copying a blob with a Copy Blob operation. Typically, you would use this operation to move from a cooler tier to a hotter one.
-This article describes how to manage a blob in an online access tier (Hot or Cool). For more information about how to move a blob to the Archive tier, see [Archive a blob](archive-blob.md). For more information about how to rehydrate a blob from the Archive tier, see [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md).
+This article describes how to manage a blob in an online access tier. For more information about how to move a blob to the archive tier, see [Archive a blob](archive-blob.md). For more information about how to rehydrate a blob from the archive tier, see [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md).
-For more information about access tiers for blobs, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+For more information about access tiers for blobs, see [Access tiers for blob data](access-tiers-overview.md).
+
+> [!IMPORTANT]
+> The cold tier is currently in PREVIEW and is available in the following regions: Canada Central, Canada East, France Central, France South, and Korea Central.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To enroll, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
## Set the default access tier for a storage account
When you change the default access tier setting for an existing general-purpose
To set the default access tier for a storage account at create time in the Azure portal, follow these steps: 1. Navigate to the **Storage accounts** page, and select the **Create** button.
-1. Fill out the **Basics** tab.
-1. On the **Advanced** tab, under **Blob storage**, set the **Access tier** to either *Hot* or *Cool*. The default setting is *Hot*.
-1. Select **Review + Create** to validate your settings and create your storage account.
+
+2. Fill out the **Basics** tab.
+
+3. On the **Advanced** tab, under **Blob storage**, set the **Access tier** to either *Hot* or *Cool*. The default setting is *Hot*.
+
+4. Select **Review + Create** to validate your settings and create your storage account.
:::image type="content" source="media/access-tiers-online-manage/set-default-access-tier-create-portal.png" alt-text="Screenshot showing how to set the default access tier when creating a storage account."::: To update the default access tier for an existing storage account in the Azure portal, follow these steps: 1. Navigate to the storage account in the Azure portal.
-1. Under **Settings**, select **Configuration**.
-1. Locate the **Blob access tier (default)** setting, and select either *Hot* or *Cool*. The default setting is *Hot*, if you have not previously set this property.
-1. Save your changes.
+
+2. Under **Settings**, select **Configuration**.
+
+3. Locate the **Blob access tier (default)** setting, and select either *Hot* or *Cool*. The default setting is *Hot* if you haven't previously set this property.
+
+4. Save your changes.
#### [PowerShell](#tab/azure-powershell)
To change the default access tier setting for a storage account with PowerShell,
$rgName = <resource-group> $accountName = <storage-account>
-# Change the storage account tier to Cool
+# Change the storage account tier to cool
Set-AzStorageAccount -ResourceGroupName $rgName -Name $accountName -AccessTier Cool ```
Set-AzStorageAccount -ResourceGroupName $rgName -Name $accountName -AccessTier C
To change the default access tier setting for a storage account with Azure CLI, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command, specifying the new default access tier. ```azurecli
-# Change the storage account tier to Cool
+# Change the storage account tier to cool
az storage account update \ --resource-group <resource-group> \ --name <storage-account> \
N/A
When you upload a blob to Azure Storage, you have two options for setting the blob's tier on upload: -- You can explicitly specify the tier in which the blob will be created. This setting overrides the default access tier for the storage account. You can set the tier for a blob or set of blobs on upload to Hot, Cool, or Archive.-- You can upload a blob without specifying a tier. In this case, the blob will be created in the default access tier specified for the storage account (either Hot or Cool).
+- You can explicitly specify the tier in which the blob will be created. This setting overrides the default access tier for the storage account. You can set the tier for a blob or set of blobs on upload to hot, cool, cold, or archive.
+- You can upload a blob without specifying a tier. In this case, the blob will be created in the default access tier specified for the storage account (either hot or cool).
If you are uploading a new blob that uses an encryption scope, you cannot change the access tier for that blob.
-The following sections describe how to specify that a blob is uploaded to either the Hot or Cool tier. For more information about archiving a blob on upload, see [Archive blobs on upload](archive-blob.md#archive-blobs-on-upload).
+The following sections describe how to specify that a blob is uploaded to either the hot or cool tier. For more information about archiving a blob on upload, see [Archive blobs on upload](archive-blob.md#archive-blobs-on-upload).
### Upload a blob to a specific online tier
-To create a blob in the Hot or Cool tier, specify that tier when you create the blob. The access tier specified on upload overrides the default access tier for the storage account.
+To create a blob in the hot, cool, or cold tier, specify that tier when you create the blob. The access tier specified on upload overrides the default access tier for the storage account.
### [Portal](#tab/azure-portal) To upload a blob or set of blobs to a specific tier from the Azure portal, follow these steps: 1. Navigate to the target container.
-1. Select the **Upload** button.
-1. Select the file or files to upload.
-1. Expand the **Advanced** section, and set the **Access tier** to *Hot* or *Cool*.
-1. Select the **Upload** button.
+
+2. Select the **Upload** button.
+
+3. Select the file or files to upload.
+
+4. Expand the **Advanced** section, and set the **Access tier** to *Hot* or *Cool*.
+
+ > [!NOTE]
+ > The cold tier is in preview and appears as an option if the storage account is in a region that supports the preview.
+
+5. Select the **Upload** button.
:::image type="content" source="media/access-tiers-online-manage/upload-blob-to-online-tier-portal.png" alt-text="Screenshot showing how to upload blobs to an online tier in the Azure portal.":::
To upload a blob or set of blobs to a specific tier with PowerShell, call the [S
$rgName = <resource-group> $storageAccount = <storage-account> $containerName = <container>
+# tier can be hot, cool, cold, or archive
+$tier = <tier>
# Get context object $ctx = New-AzStorageContext -StorageAccountName $storageAccount -UseConnectedAccount
$ctx = New-AzStorageContext -StorageAccountName $storageAccount -UseConnectedAcc
# Create new container. New-AzStorageContainer -Name $containerName -Context $ctx
-# Upload a single file named blob1.txt to the Cool tier.
+# Upload a single file named blob1.txt to the cool tier.
Set-AzStorageBlobContent -Container $containerName ` -File "blob1.txt" ` -Blob "blob1.txt" ` -Context $ctx ` -StandardBlobTier Cool
-# Upload the contents of a sample-blobs directory to the Cool tier, recursively.
+# Upload the contents of a sample-blobs directory to the cool tier, recursively.
Get-ChildItem -Path "C:\sample-blobs" -File -Recurse | Set-AzStorageBlobContent -Container $containerName ` -Context $ctx `
- -StandardBlobTier Cool
+ -StandardBlobTier $tier
``` ### [Azure CLI](#tab/azure-cli)
-To upload a blob to a specific tier with Azure CLI, call the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values:
+To upload a blob to a specific tier with Azure CLI, call the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values. Replace the `<tier>` placeholder with `hot`, `cool`, `cold`, or `archive`.
```azurecli az storage blob upload \
az storage blob upload \
--container-name <container> \ --name <blob> \ --file <file> \
- --tier Cool \
+ --tier <tier> \
--auth-mode login ```
-To upload a set of blobs to a specific tier with Azure CLI, call the [az storage blob upload-batch](/cli/azure/storage/blob#az-storage-blob-upload-batch) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values:
+To upload a set of blobs to a specific tier with Azure CLI, call the [az storage blob upload-batch](/cli/azure/storage/blob#az-storage-blob-upload-batch) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values. Replace the `<tier>` placeholder with `hot`, `cool`, `cold`, or `archive`.
```azurecli az storage blob upload-batch \ --destination <container> \ --source <source-directory> \ --account-name <storage-account> \
- --tier Cool \
+ --tier <tier> \
--auth-mode login ```
azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.blob.core
Storage accounts have a default access tier setting that indicates in which online tier a new blob is created. The default access tier setting can be set to either hot or cool. The behavior of this setting is slightly different depending on the type of storage account: -- The default access tier for a new general-purpose v2 storage account is set to the Hot tier by default. You can change the default access tier setting when you create a storage account or after it's created.-- When you create a legacy Blob Storage account, you must specify the default access tier setting as Hot or Cool when you create the storage account. You can change the default access tier setting for the storage account after it's created.
+- The default access tier for a new general-purpose v2 storage account is set to the hot tier by default. You can change the default access tier setting when you create a storage account or after it's created.
+- When you create a legacy Blob Storage account, you must specify the default access tier setting as hot or cool when you create the storage account. You can change the default access tier setting for the storage account after it's created.
A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. You can determine whether a blob's access tier is inferred by using the Azure portal, PowerShell, or Azure CLI.
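For example, the following PowerShell sketch reads the tier and the inferred flag for a single blob. It assumes the `$ctx`, `$containerName`, and `$blobName` variables from the earlier examples in this article.

```azurepowershell
# Sketch: check whether a blob's access tier is inferred from the account default.
$blob  = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
$props = $blob.BlobClient.GetProperties().Value
"Tier: {0}, Inferred: {1}" -f $props.AccessTier, $props.AccessTierInferred
```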
When you change a blob's tier, you move that blob and all of its data to the tar
#### [Portal](#tab/azure-portal)
-To change a blob's tier from Hot to Cool in the Azure portal, follow these steps:
+To change a blob's tier to a cooler tier in the Azure portal, follow these steps:
1. Navigate to the blob for which you want to change the tier. 1. Select the blob, then select the **Change tier** button.
To change a blob's tier from Hot to Cool in the Azure portal, follow these steps
#### [PowerShell](#tab/azure-powershell)
-To change a blob's tier from Hot to Cool with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier to a cooler tier with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell # Initialize these variables with your values.
$rgName = "<resource-group>"
$accountName = "<storage-account>" $containerName = "<container>" $blobName = "<blob>"
+$tier = "<tier>"
# Get the storage account context $ctx = (Get-AzStorageAccount ` -ResourceGroupName $rgName ` -Name $accountName).Context
-# Change the blob's access tier to Cool.
+# Change the blob's access tier.
$blob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
-$blob.BlobClient.SetAccessTier("Cool", $null, "Standard")
+$blob.BlobClient.SetAccessTier($tier, $null, "Standard")
``` #### [Azure CLI](#tab/azure-cli)
-To change a blob's tier from Hot to Cool with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier to a cooler tier with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
```azurecli az storage blob set-tier \ --account-name <storage-account> \ --container-name <container> \ --name <blob> \
- --tier Cool \
+ --tier <tier> \
--auth-mode login ``` #### [AzCopy](#tab/azcopy)
-To change a blob's tier from Hot to Cool, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `-block-blob-tier` parameter to `cool`.
+To change a blob's tier to a cooler tier, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `--block-blob-tier` parameter.
> [!IMPORTANT] > Using AzCopy to change a blob's access tier is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!NOTE]
-> This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS tokenn because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
+> This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
```azcopy
-azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>' --block-blob-tier=cool
+azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>' --block-blob-tier=<tier>
```
+> [!NOTE]
+> Setting the `--block-blob-tier` parameter to `cold` is not yet supported. If you want to change a blob's tier to the `cold` tier, [enroll](https://forms.office.com/r/788B1gr3Nq) in the cold tier preview, and then change the blob's tier to cold by using the Azure portal, PowerShell, or the Azure CLI.
+ To change the access tier for all blobs in a virtual directory, refer to the virtual directory name instead of the blob name, and then append `--recursive=true` to the command. ```azcopy
-azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/myvirtualdirectory' --block-blob-tier=cool --recursive=true
+azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/myvirtualdirectory' --block-blob-tier=<tier> --recursive=true
``` ### Copy a blob to a different online tier
-Call [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. When you copy a blob to a different tier, you move that blob and all of its data to the target tier. The source blob remains in the original tier, and a new blob is created in the target tier. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're moving a blob from Cool to Hot, or rehydrating a blob from the Archive tier.
+Call the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. When you copy a blob to a different tier, you move that blob and all of its data to the target tier. The source blob remains in the original tier, and a new blob is created in the target tier. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're moving a blob to a warmer tier, or rehydrating a blob from the archive tier.
#### [Portal](#tab/azure-portal)
N/A
#### [PowerShell](#tab/azure-powershell)
-To copy a blob to from Cool to Hot with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the target tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob from cool to hot with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the target tier. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell # Initialize these variables with your values.
$ctx = (Get-AzStorageAccount `
-ResourceGroupName $rgName ` -Name $accountName).Context
-# Copy the source blob to a new destination blob in Hot tier.
+# Copy the source blob to a new destination blob in the hot tier.
Start-AzStorageBlobCopy -SrcContainer $srcContainerName ` -SrcBlob $srcBlobName ` -DestContainer $destContainerName `
Start-AzStorageBlobCopy -SrcContainer $srcContainerName `
#### [Azure CLI](#tab/azure-cli)
-To copy a blob from Cool to Hot with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the target tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob to a warmer tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the target tier. Remember to replace placeholders in angle brackets with your own values:
```azurecli az storage blob copy start \
az storage blob copy start \
#### [AzCopy](#tab/azcopy)
-To copy a blob from Cool to Hot with AzCopy, use [azcopy copy](..\common\storage-ref-azcopy-copy.md) command and set the `--block-blob-tier` parameter to `hot`.
+To copy a blob from cool to hot with AzCopy, use the [azcopy copy](..\common\storage-ref-azcopy-copy.md) command and set the `--block-blob-tier` parameter to `hot`.
> [!NOTE] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
To copy a blob from Cool to Hot with AzCopy, use [azcopy copy](..\common\storage
azcopy copy 'https://mystorageeaccount.blob.core.windows.net/mysourcecontainer/myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mydestinationcontainer/myTextFile.txt' --block-blob-tier=hot ```
-The copy operation is synchronous so when the command returns, that indicates that all files have been copied.
+The copy operation is synchronous, so when the command returns, all files have been copied.
## Next steps -- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Access tiers for blob data](access-tiers-overview.md)
- [Archive a blob](archive-blob.md) - [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Title: Hot, cool, and archive access tiers for blob data
+ Title: Access tiers for blob data
-description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the hot, cool, and archive access tiers for Blob Storage.
+description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the hot, cool, cold, and archive access tiers for Blob Storage.
Previously updated : 09/23/2022 Last updated : 05/02/2023
-# Hot, cool, and archive access tiers for blob data
+# Access tiers for blob data
Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include: - **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The hot tier has the highest storage costs, but the lowest access costs.-- **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cool tier should be stored for a minimum of 30 days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
+- **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cool tier should be stored for a minimum of **30** days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
+- **Cold tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cold tier should be stored for a minimum of **90** days. The cold tier has lower storage costs and higher access costs compared to the cool tier.
- **Archive tier** - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the archive tier should be stored for a minimum of 180 days.
+> [!IMPORTANT]
+> The cold tier is currently in PREVIEW and is available in the following regions: Canada Central, Canada East, France Central, France South, and Korea Central.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To enroll, see [Cold tier (preview)](#cold-tier-preview).
+ Azure storage capacity limits are set at the account level, rather than according to access tier. You can choose to maximize your capacity usage in one tier, or to distribute capacity across two or more tiers. > [!NOTE]
Azure storage capacity limits are set at the account level, rather than accordin
## Online access tiers
-When your data is stored in an online access tier (either hot or cool), users can access it immediately. The hot tier is the best choice for data that is in active use. The cool tier is ideal for data that is accessed less frequently, but that still must be available for reading and writing.
+When your data is stored in an online access tier (either hot, cool, or cold), users can access it immediately. The hot tier is the best choice for data that is in active use. The cool or cold tier is ideal for data that is accessed less frequently, but that still must be available for reading and writing.
Example usage scenarios for the hot tier include: - Data that's in active use or data that you expect will require frequent reads and writes. - Data that's staged for processing and eventual migration to the cool access tier.
-Usage scenarios for the cool access tier include:
+Usage scenarios for the cool and cold access tiers include:
- Short-term data backup and disaster recovery. - Older data sets that aren't used frequently, but are expected to be available for immediate access. - Large data sets that need to be stored in a cost-effective way while other data is being gathered for processing.
-To learn how to move a blob to the hot or cool tier, see [Set a blob's access tier](access-tiers-online-manage.md).
+To learn how to move a blob to the hot, cool, or cold tier, see [Set a blob's access tier](access-tiers-online-manage.md).
-Data in the cool tier has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the hot tier. For data in the cool tier, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
+Data in the cool and cold tiers has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the hot tier. For data in the cool or cold tiers, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
-A blob in the cool tier in a general-purpose v2 account is subject to an early deletion penalty if it's deleted or moved to a different tier before 30 days has elapsed. This charge is prorated. For example, if a blob is moved to the cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the cool tier.
+A blob in the cool tier in a general-purpose v2 account is subject to an early deletion penalty if it's deleted or moved to a different tier before 30 days has elapsed. For a blob in the cold tier, the deletion penalty applies if it's deleted or moved to a different tier before 90 days has elapsed. This charge is prorated. For example, if a blob is moved to the cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the cool tier.
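To make the proration concrete, here's a small sketch of the math. The per-GB monthly price is an assumed sample value for illustration, not an actual Azure rate.

```azurepowershell
# Sketch of the prorated early-deletion fee; the price below is an assumed sample value.
$minRetentionDays    = 30     # 90 for the cold tier
$daysStored          = 21
$blobSizeGb          = 100
$coolPricePerGbMonth = 0.01   # assumed sample price; see the Azure pricing page for real rates

$chargedDays = $minRetentionDays - $daysStored                  # 9 days
$fee = $blobSizeGb * $coolPricePerGbMonth * ($chargedDays / 30)
"Early deletion fee: {0:N2}" -f $fee                            # 0.30 for these sample values
```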
-The hot and cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
+The hot, cool, and cold tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
## Archive access tier
-The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, this tier has higher data retrieval costs with a higher latency as compared to the hot and cool tiers. Example usage scenarios for the archive access tier include:
+The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, this tier has higher data retrieval costs with a higher latency as compared to the hot, cool, and cold tiers. Example usage scenarios for the archive access tier include:
- Long-term backup, secondary backup, and archival datasets - Original (raw) data that must be preserved, even after it has been processed into final usable form
To learn how to move a blob to the archive tier, see [Archive a blob](archive-bl
Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge. For example, if a blob is moved to the archive tier and then deleted or moved to the hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the archive tier.
-While a blob is in the archive tier, it can't be read or modified. To read or download a blob in the archive tier, you must first rehydrate it to an online tier, either hot or cool. Data in the archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
+While a blob is in the archive tier, it can't be read or modified. To read or download a blob in the archive tier, you must first rehydrate it to an online tier, either hot, cool, or cold. Data in the archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the archive tier is read-only, while blob index tags can be read or written. Storage costs for metadata of archived blobs are charged at cool tier rates. Snapshots aren't supported for archived blobs.
The following operations are supported for blobs in the archive tier:
Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the archive tier. The archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. For more information about redundancy configurations for Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
-To change the redundancy configuration for a storage account that contains blobs in the archive tier, you must first rehydrate all archived blobs to the hot or cool tier. Because rehydration operations can be costly and time-consuming, Microsoft recommends that you avoid changing the redundancy configuration of a storage account that contains archived blobs.
+To change the redundancy configuration for a storage account that contains blobs in the archive tier, you must first rehydrate all archived blobs to the hot, cool, or cold tier. Because rehydration operations can be costly and time-consuming, Microsoft recommends that you avoid changing the redundancy configuration of a storage account that contains archived blobs.
Migrating a storage account from LRS to GRS is supported as long as no blobs were moved to the archive tier while the account was configured for LRS. An account can be moved back to GRS if the update is performed less than 30 days from the time the account became LRS, and no blobs were moved to the archive tier while the account was set to LRS.
The default access tier for a new general-purpose v2 storage account is set to t
A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)** or **Cool (inferred)**.
-Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting from hot to cool in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from cool to hot in a general-purpose v2 account.
+Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting to a cooler tier in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle to a warmer tier in a general-purpose v2 account.
-When you create a legacy Blob Storage account, you must specify the default access tier setting as hot or cool at create time. There's no charge for changing the default account access tier setting from hot to cool in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from cool to hot in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
+When you create a legacy Blob Storage account, you must specify the default access tier setting as hot or cool at create time. There's no charge for changing the default account access tier setting to a cooler tier in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle to a warmer tier in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
> [!NOTE]
-> The archive tier is not supported as the default access tier for a storage account.
+> The cold tier and the archive tier are not supported as the default access tier for a storage account.
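To illustrate the default access tier setting described above, here's a minimal Azure CLI sketch; the names in angle brackets are placeholders:

```azurecli
# Change the default account access tier to cool. Blobs whose tier is
# inferred are billed for the tier change as described above.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --access-tier Cool
```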
## Setting or changing a blob's tier
To explicitly set a blob's tier when you create it, specify the tier when you up
After a blob is created, you can change its tier in either of the following ways:

-- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a hotter tier to a cooler one.
-- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the archive tier to an online tier, or moving a blob from cool to hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
+- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a warmer tier to a cooler one.
+
+ > [!NOTE]
+ > You can't rehydrate an archived blob to an online tier by using lifecycle management policies.
+
+- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the archive tier to an online tier, or moving a blob from cool or cold to hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
-Changing a blob's tier from hot to cool or archive is instantaneous, as is changing from cool to hot. Rehydrating a blob from the archive tier to either the hot or cool tier can take up to 15 hours.
+Changing a blob's tier from a warmer tier to a cooler one is instantaneous, as is changing from cold or cool to hot. Rehydrating a blob from the archive tier to an online tier such as the hot, cool, or cold tier can take up to 15 hours.
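For illustration, here's a hedged Azure CLI sketch of the **Set Blob Tier** approach, first for an instantaneous move to a cooler tier and then for a rehydration from archive; the account, container, and blob names are placeholders:

```azurecli
# Move a blob to the cool tier. The change is instantaneous.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob> \
    --tier Cool \
    --auth-mode login

# Rehydrate an archived blob to the hot tier. This can take up to 15 hours.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob> \
    --tier Hot \
    --rehydrate-priority Standard \
    --auth-mode login
```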
Keep in mind the following points when changing a blob's tier:

- You can't call **Set Blob Tier** on a blob that uses an encryption scope. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).
- If a blob's tier is inferred as cool based on the storage account's default access tier and the blob is moved to the archive tier, there's no early deletion charge.
-- If a blob is explicitly moved to the cool tier and then moved to the archive tier, the early deletion charge applies.
-The following table summarizes the approaches you can take to move blobs between various tiers.
-
-| Origin/Destination | Hot tier | Cool tier | Archive tier |
-|--|--|--|--|
-| **Hot tier** | N/A | Change a blob's tier from hot to cool with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md)<br /><br />Move blobs to the cool tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | Change a blob's tier from hot to archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
-| **Cool tier** | Change a blob's tier from cool to hot with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md) <br /><br />Move blobs to the hot tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | N/A | Change a blob's tier from cool to archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
-| **Archive tier** | Rehydrate to the hot tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | Rehydrate to cool tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | N/A |
+- If a blob is explicitly moved to the cool or cold tier and then moved to the archive tier, the early deletion charge applies.
## Blob lifecycle management

Blob storage lifecycle management offers a rule-based policy that you can use to transition your data to the desired access tier when your specified conditions are met. You can also use lifecycle management to expire data at the end of its life. See [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md) to learn more.

> [!NOTE]
-> Data stored in a premium block blob storage account cannot be tiered to hot, cool, or archive using [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or using Azure Blob Storage lifecycle management. To move data, you must synchronously copy blobs from the block blob storage account to the hot tier in a different account using the [Put Block From URL API](/rest/api/storageservices/put-block-from-url) or a version of AzCopy that supports this API. The **Put Block From URL** API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
+> Data stored in a premium block blob storage account cannot be tiered to hot, cool, cold, or archive by using [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or by using Azure Blob Storage lifecycle management. To move data, you must synchronously copy blobs from the block blob storage account to the hot tier in a different account using the [Put Block From URL API](/rest/api/storageservices/put-block-from-url) or a version of AzCopy that supports this API. The **Put Block From URL** API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
## Summary of access tier options
-The following table summarizes the features of the hot, cool, and archive access tiers.
+The following table summarizes the features of the hot, cool, cold, and archive access tiers.
-| | **Hot tier** | **Cool tier** | **Archive tier** |
-|--|--|--|--|
-| **Availability** | 99.9% | 99% | Offline |
-| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.9% | Offline |
-| **Usage charges** | Higher storage costs, but lower access and transaction costs | Lower storage costs, but higher access and transaction costs | Lowest storage costs, but highest access, and transaction costs |
-| **Minimum recommended data retention period** | N/A | 30 days<sup>1</sup> | 180 days |
-| **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Hours<sup>2</sup> |
-| **Supported redundancy configurations** | All | All | LRS, GRS, and RA-GRS<sup>3</sup> only |
+| | **Hot tier** | **Cool tier** | **Cold tier (preview)** | **Archive tier** |
+|--|--|--|--|--|
+| **Availability** | 99.9% | 99% | 99% | Offline |
+| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.9% | 99.9% | Offline |
+| **Usage charges** | Higher storage costs, but lower access and transaction costs | Lower storage costs, but higher access and transaction costs | Lower storage costs, but higher access and transaction costs | Lowest storage costs, but highest access and transaction costs |
+| **Minimum recommended data retention period** | N/A | 30 days<sup>1</sup> | 90 days<sup>1</sup> | 180 days |
+| **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Milliseconds | Hours<sup>2</sup> |
+| **Supported redundancy configurations** | All | All | All | LRS, GRS, and RA-GRS<sup>3</sup> only |
-<sup>1</sup> Objects in the cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. For Blob Storage accounts, there's no minimum retention duration for the cool tier.
+<sup>1</sup> Objects in the cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. Objects in the cold tier on general-purpose v2 accounts have a minimum retention duration of 90 days. For Blob Storage accounts, there's no minimum retention duration for the cool or cold tier.
<sup>2</sup> When rehydrating a blob from the archive tier, you can choose either a standard or high rehydration priority option. Each offers different retrieval latencies and costs. For more information, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
In addition to the amount of data stored, the cost of storing data varies depending on the access tier. The per-gigabyte capacity cost decreases as the tier gets cooler.
### Data access costs
-Data access charges increase as the tier gets cooler. For data in the cool and archive access tier, you're charged a per-gigabyte data access charge for reads.
+Data access charges increase as the tier gets cooler. For data in the cool, cold, and archive access tiers, you're charged a per-gigabyte data access charge for reads.
### Transaction costs
Keep in mind the following billing impacts when changing a blob's tier:
- When a blob is uploaded or moved between tiers, it's charged at the corresponding rate immediately upon upload or tier change.
- When a blob is moved to a cooler tier, the operation is billed as a write operation to the destination tier, where the write operation (per 10,000) and data write (per GB) charges of the destination tier apply.
-- When a blob is moved to a warmer tier, the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the cool or archive tier may apply as well.
-- While a blob is being rehydrated from the archive tier, that blob's data is billed as archived data until the data is restored and the blob's tier changes to hot or cool.
+- When a blob is moved to a warmer tier, the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the cool, cold, or archive tier may apply as well.
+- While a blob is being rehydrated from the archive tier, that blob's data is billed as archived data until the data is restored and the blob's tier changes to hot, cool, or cold.
The following table summarizes how tier changes are billed.
-| | **Write charges (operation + access)** | **Read charges (operation + access)** |
-| - | -- | -- |
-| **Set Blob Tier** operation | Hot to cool<br> Hot to archive<br> Cool to archive | Archive to cool<br> Archive to hot<br> cool to hot
+| Write charges (operation + access) | Read charges (operation + access) |
+| -- | -- |
+| Hot to cool<br>Hot to cold<br>Hot to archive<br>Cool to cold<br>Cool to archive<br>Cold to archive | Archive to cold<br>Archive to cool<br>Archive to hot<br>Cold to cool<br>Cold to hot<br>Cool to hot|
+
+Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, might result in more charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
+
+## Cold tier (preview)
+
+The cold tier is currently in PREVIEW and is available in the following regions: Canada Central, Canada East, France Central, France South, and Korea Central.
+
+### Enrolling in the preview
+
+To get started, enroll in the preview by using this [form](https://forms.office.com/r/788B1gr3Nq).
+
+You'll receive an email notification when your application is approved, and the `ColdTier` feature flag is then registered on your subscription.
+
+### Verifying that you enrolled in the preview
+
+If the `ColdTier` feature flag is registered on your subscription, then you are enrolled in the preview, and you can begin using the cold tier. Use the following steps to ensure that the feature is registered.
+
+#### [Portal](#tab/azure-portal)
+
+In the **Preview features** page of your subscription, locate the **ColdTier** feature, and then make sure that **Registered** appears in the **State** column.
+
+> [!div class="mx-imgBorder"]
+> ![Verify that the feature is registered in Azure portal](./media/access-tiers-overview/cold-tier-feature-registration.png)
+
+#### [PowerShell](#tab/powershell)
+
+To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
+
+```powershell
+Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName ColdTier
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+To verify that the registration is complete, use the [az feature](/cli/azure/feature#az_feature_show) command.
+
+```azurecli
+az feature show --namespace Microsoft.Storage --name ColdTier
+```
+++
+### Limitations and known issues
+
+- The [change feed](storage-blob-change-feed.md) is not yet compatible with the cold tier.
+- [Point in time restore](point-in-time-restore-overview.md) is not yet compatible with the cold tier.
+- [Object replication](object-replication-overview.md) is not yet compatible with the cold tier.
+- The default access tier setting of the account can't be set to cold tier.
+- Blobs can't be set to the cold tier by using AzCopy. During the preview, you can set a blob's tier to the cold tier by using the Azure portal, PowerShell, or the Azure CLI.
+
+### Required REST and SDK versions
+
+If you plan to refer to the cold tier by using code in a custom application, you must use a version of the REST API or SDK that supports the cold tier. If your application uses the [REST API](/rest/api/storageservices/blob-service-rest-api), it must use version 2021-12-02 or later. If your application uses an Azure SDK, use the following versions or later.
+
+| SDK | Minimum version |
+|||
+| [.NET](/dotnet/api/azure.storage.blobs) | 12.15.0-beta.1 |
+| [Java](/java/api/overview/azure/storage-blob-readme) | 12.15.0-beta.1 |
+| [Python](/python/api/azure-storage-blob/) | 12.15.0b1 |
+| [JavaScript](/javascript/api/preview-docs/@azure/storage-blob/) | 12.13.0-beta.1 |
-Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, may result in more charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
## Feature support
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Title: Optimize costs by automatically managing the data lifecycle
-description: Use Azure Storage lifecycle management policies to create automated rules for moving data between hot, cool, and archive tiers.
+description: Use Azure Storage lifecycle management policies to create automated rules for moving data between hot, cool, cold, and archive tiers.
Previously updated : 03/09/2023 Last updated : 05/02/2023
Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need for access often drops drastically as the data ages.
With the lifecycle management policy, you can:

-- Transition blobs from cool to hot immediately when they're accessed, to optimize for performance.
-- Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects haven't been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
+- Transition blobs from cool or cold to hot immediately when they're accessed, to optimize for performance.
+- Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects haven't been accessed or modified for a period of time, to optimize for cost.
- Delete current versions of a blob, previous versions of a blob, or blob snapshots at the end of their lifecycles.
- Define rules to be run once per day at the storage account level.
- Apply rules to containers or to a subset of blobs, using name prefixes or [blob index tags](storage-manage-find-blobs.md) as filters.
+> [!IMPORTANT]
+> The cold tier is currently in PREVIEW and is available in the following regions: Canada Central, Canada East, France Central, France South, and Korea Central.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To enroll, see [Cold tier (preview)](access-tiers-overview.md#cold-tier-preview).
Consider a scenario where data is frequently accessed during the early stages of the lifecycle, but only occasionally after two weeks. Beyond the first month, the data set is rarely accessed. In this scenario, hot storage is best during the early stages. Cool storage is most appropriate for occasional access. Archive storage is the best tier option after the data ages over a month. By moving data to the appropriate storage tier based on its age with lifecycle management policy rules, you can design the least expensive solution for your needs.

Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. Lifecycle management doesn't affect system containers such as the `$logs` or `$web` containers.
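As a sketch of how the scenario above might be expressed as a lifecycle management policy with the Azure CLI (names are placeholders, and the day thresholds should be adjusted to your own access patterns):

```azurecli
# Tier block blobs to cool after 14 days and to archive after 30 days
# since last modification.
az storage account management-policy create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy '{
      "rules": [
        {
          "enabled": true,
          "name": "age-based-tiering",
          "type": "Lifecycle",
          "definition": {
            "actions": {
              "baseBlob": {
                "tierToCool": { "daysAfterModificationGreaterThan": 14 },
                "tierToArchive": { "daysAfterModificationGreaterThan": 30 }
              }
            },
            "filters": { "blobTypes": [ "blockBlob" ] }
          }
        }
      ]
    }'
```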
Lifecycle management supports tiering and deletion of current versions, previous versions, and blob snapshots.
| Action | Current Version | Snapshot | Previous Versions |
|--|--|--|--|
| tierToCool | Supported for `blockBlob` | Supported | Supported |
+| tierToCold | Supported for `blockBlob` | Supported | Supported |
| enableAutoTierToHotFromCool<sup>1</sup> | Supported for `blockBlob` | Not supported | Not supported |
| tierToArchive<sup>4</sup> | Supported for `blockBlob` | Supported | Supported |
| delete<sup>2,3</sup> | Supported for `blockBlob` and `appendBlob` | Supported | Supported |
The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours to go into effect.
### If I update an existing policy, how long does it take for the actions to run?
-The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete. If the update is to disable or delete a rule, and enableAutoTierToHotFromCool was used, auto-tiering to Hot tier will still happen. For example, set a rule including enableAutoTierToHotFromCool based on last access. If the rule is disabled/deleted, and a blob is currently in cool and then accessed, it will move back to Hot as that is applied on access outside of lifecycle management. The blob won't then move from Hot to Cool given the lifecycle management rule is disabled/deleted. The only way to prevent autoTierToHotFromCool is to turn off last access time tracking.
+The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions might take up to 48 hours to complete. If the update disables or deletes a rule that used enableAutoTierToHotFromCool, auto-tiering to the hot tier still happens. For example, suppose a rule that includes enableAutoTierToHotFromCool is based on last access. If the rule is disabled or deleted, and a blob that's currently in the cool or cold tier is then accessed, it moves back to the hot tier, because that move is applied on access, outside of lifecycle management. The blob won't then move from hot back to cool or cold, because the lifecycle management rule is disabled or deleted. The only way to prevent autoTierToHotFromCool is to turn off last access time tracking.
### The run completes but doesn't move or delete some blobs
If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob by changing its tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier. To avoid this situation, use one of the following approaches:
- Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to the archive tier.
-- If the blob needs to stay in the hot or cool tier permanently, copy the blob to another location where the lifecycle manage policy isn't in effect.
+- If the blob needs to stay in the hot, cool, or cold tier permanently, copy the blob to another location where the lifecycle management policy isn't in effect.
### The blob prefix match string didn't apply the policy to the expected blobs
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
Title: Configure a lifecycle management policy
-description: Configure a lifecycle management policy to automatically move data between hot, cool, and archive tiers during the data lifecycle.
+description: Configure a lifecycle management policy to automatically move data between hot, cool, cold, and archive tiers during the data lifecycle.
Previously updated : 12/21/2022 Last updated : 05/02/2023
ms.devlang: azurecli
Azure Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle. A lifecycle policy acts on a base blob, and optionally on the blob's versions or snapshots. For more information about lifecycle management policies, see [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md).
-A lifecycle management policy is comprised of one or more rules that define a set of actions to take based on a condition being met. For a base blob, you can choose to check one of the following conditions:
+A lifecycle management policy is composed of one or more rules that define a set of actions to take based on a condition being met. For a base blob, you can choose to check one of the following conditions:
- The number of days since the blob was created.
- The number of days since the blob was last modified.
For a blob snapshot or version, the condition that is checked is the number of days since the snapshot or version was created.
## Optionally enable access time tracking
-Before you configure a lifecycle management policy, you can choose to enable blob access time tracking. When access time tracking is enabled, a lifecycle management policy can include an action based on the time that the blob was last accessed with a read or write operation.To minimize the effect on read access latency, only the first read of the last 24 hours updates the last access time. Subsequent reads in the same 24-hour period don't update the last access time. If a blob is modified between reads, the last access time is the more recent of the two values.
+Before you configure a lifecycle management policy, you can choose to enable blob access time tracking. When access time tracking is enabled, a lifecycle management policy can include an action based on the time that the blob was last accessed with a read or write operation. To minimize the effect on read access latency, only the first read of the last 24 hours updates the last access time. Subsequent reads in the same 24-hour period don't update the last access time. If a blob is modified between reads, the last access time is the more recent of the two values.
#### [Portal](#tab/azure-portal)
A lifecycle management policy must be read or written in full. Partial updates aren't supported.

## See also

- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
-- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Access tiers for blob data](access-tiers-overview.md)
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
Previously updated : 10/02/2020 Last updated : 05/02/2023
Azure Storage supports the following dimensions for metrics in Azure Monitor.
| Dimension Name | Description |
|--|--|
| **BlobType** | The type of blob for Blob metrics only. The supported values are **BlockBlob**, **PageBlob**, and **Azure Data Lake Storage**. Append blobs are included in **BlockBlob**. |
-| **Tier** | Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. See more in [Azure Storage blob tier](../blobs/access-tiers-overview.md). The supported values include: <br/> <li>**Hot**: Hot tier</li> <li>**Cool**: Cool tier</li> <li>**Archive**: Archive tier</li> <li>**Premium**: Premium tier for block blob</li> <li>**P4/P6/P10/P15/P20/P30/P40/P50/P60**: Tier types for premium page blob</li> <li>**Standard**: Tier type for standard page Blob</li> <li>**Untiered**: Tier type for general purpose v1 storage account</li> |
+| **Tier** | Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. See more in [Azure Storage blob tier](../blobs/access-tiers-overview.md). The supported values include: <br><br>**Hot**: Hot tier<br>**Cool**: Cool tier<br>**Cold**: Cold tier<br>**Archive**: Archive tier<br>**Premium**: Premium tier for block blob<br>**P4/P6/P10/P15/P20/P30/P40/P50/P60**: Tier types for premium page blob<br>**Standard**: Tier type for standard page Blob<br>**Untiered**: Tier type for general purpose v1 storage account |
For the metrics supporting dimensions, you need to specify the dimension value to see the corresponding metrics values. For example, if you look at **Transactions** value for successful responses, you need to filter the **ResponseType** dimension with **Success**. If you look at **BlobCount** value for Block Blob, you need to filter the **BlobType** dimension with **BlockBlob**.
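As an illustration of filtering on a dimension, the following Azure CLI sketch queries the **Transactions** metric filtered to successful responses; the resource ID components in angle brackets are placeholders:

```azurecli
# Query blob-service transactions where the ResponseType dimension is Success.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default" \
    --metric Transactions \
    --filter "ResponseType eq 'Success'" \
    --interval PT1H
```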
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
az storage account update -g <resource-group> -n <storage-account> --enable-sftp
+## Disable SFTP support
+
+This section shows you how to disable SFTP support for an existing storage account. Because SFTP support incurs an hourly cost, consider disabling SFTP support when clients are not actively using SFTP to transfer data.
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account.
+
+2. Under **Settings**, select **SFTP**.
+
+3. Select **Disable SFTP**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the disable SFTP button.](./media/secure-file-transfer-protocol-support-how-to/sftp-enable-option-disable.png)
+
+### [PowerShell](#tab/powershell)
+
+To disable SFTP support, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command and set the `-EnableSftp` parameter to false. Remember to replace the values in angle brackets with your own values:
+
+```powershell
+$resourceGroupName = "<resource-group>"
+$storageAccountName = "<storage-account>"
+
+Set-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -EnableSftp $false
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To disable SFTP support, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and set the `--enable-sftp` parameter to false. Remember to replace the values in angle brackets with your own values:
+
+```azurecli
+az storage account update -g <resource-group> -n <storage-account> --enable-sftp=false
+```
## Configure permissions

Azure Storage doesn't support shared access signature (SAS) or Azure Active Directory (Azure AD) authentication for accessing the SFTP endpoint. Instead, you must use an identity called a *local user* that can be secured with an Azure-generated password or a secure shell (SSH) key pair. To grant access to a connecting client, the storage account must have an identity associated with the password or key pair.
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 10/20/2022 Last updated : 04/03/2023
You can't set custom passwords; rather, Azure generates one for you. If you choose password authentication, the generated password is provided after you finish configuring the local user.
A public-private key pair is the most common form of authentication for Secure Shell (SSH). The private key is secret and should be known only to the local user. The public key is stored in Azure. When an SSH client connects to the storage account using a local user identity, it sends a message with the public key and signature. Azure validates the message and checks that the user and key are recognized by the storage account. To learn more, see [Overview of SSH and keys](../../virtual-machines/linux/ssh-from-windows.md).
-If you choose to authenticate with private-public key pair, you can either generate one, use one already stored in Azure, or provide Azure the public key of an existing public-private key pair. You can have a maxiumum of 10 public keys per local user.
+If you choose to authenticate with a public-private key pair, you can either generate one, use one already stored in Azure, or provide Azure with the public key of an existing public-private key pair. You can have a maximum of 10 public keys per local user.
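As a sketch of how a local user secured with an SSH key pair might be created from the command line (assuming a recent Azure CLI version; all names and the key value are placeholders):

```azurecli
# Create a local user with read/write access to one container,
# authenticated by an SSH public key rather than a password.
az storage account local-user create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --name <local-user> \
    --home-directory <container> \
    --permission-scope permissions=rw service=blob resource-name=<container> \
    --ssh-authorized-key key="ssh-rsa <public-key>" \
    --has-ssh-key true \
    --has-ssh-password false
```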
## Container permissions
See the [limitations and known issues article](secure-file-transfer-protocol-kno
## Pricing and billing
-Enabling the SFTP endpoint has an hourly cost. We will start applying this hourly cost on or after January 1, 2023. For the latest pricing information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+Enabling the SFTP endpoint has an hourly cost. For the latest pricing information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+
+> [!TIP]
+> To avoid passive charges, consider enabling SFTP only when you are actively using it to transfer data. For guidance about how to enable and then disable SFTP support, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
Transaction, storage, and networking prices for the underlying storage account apply. To learn more, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md#understand-the-full-billing-model-for-azure-blob-storage).
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 03/23/2023 Last updated : 05/02/2023
The following table describes whether a feature is supported in a standard general-purpose v2 account.
| Storage feature | Default | HNS | NFS | SFTP |
|--|--|--|--|--|
-| [Access tier - archive](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
-| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup>| &#x2705;<sup>3</sup> |
-| [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
+| [Access tiers (hot, cool, cold, and archive)](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
The following table describes whether a feature is supported in a premium block blob account.
| Storage feature | Default | HNS | NFS | SFTP |
|--|--|--|--|--|
-| [Access tier - archive](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Access tier - cool](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Access tier - hot](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Access tiers (hot, cool, cold, and archive)](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Previously updated : 03/13/2023 Last updated : 04/03/2023
Some actions, such as changing the default access tier of your account, can lead to unexpected charges.
| Access tiers | Rehydrating from archive | High priority rehydration from archive can lead to higher than normal bills. Microsoft recommends reserving high-priority rehydration for use in emergency data restoration situations. <br><br>For more information, see [Rehydration priority](../blobs/archive-rehydrate-overview.md#rehydration-priority).|
| Data protection | Enabling blob soft delete | Overwriting blobs can lead to blob snapshots. Unlike the case where a blob is deleted, the creation of these snapshots isn't logged. This can lead to unexpected storage costs. Consider whether data that is frequently overwritten should be placed in an account that doesn't have soft delete enabled.<br><br>For more information, see [How overwrites are handled when soft delete is enabled](../blobs/soft-delete-blob-overview.md#how-overwrites-are-handled-when-soft-delete-is-enabled).|
| Data protection | Enabling blob versioning | Every write operation on a blob creates a new version. As is the case with enabling blob soft delete, consider whether data that is frequently overwritten should be placed in an account that doesn't have versioning enabled. <br><br>For more information, see [Versioning on write operations](../blobs/versioning-overview.md#versioning-on-write-operations). |
-| Monitoring | Enabling Storage Analytics logs (classic logs)| Storage analytics logs can accumulate in your account over time if the retention policy is not set. Make sure to set the retention policy to avoid log build up which can lead to unexpected capacity charges.<br><br>For more information, see [Modify log data retention period](manage-storage-analytics-logs.md#modify-log-data-retention-period) |
+| Monitoring | Enabling Storage Analytics logs (classic logs)| Storage analytics logs can accumulate in your account over time if the retention policy is not set. Make sure to set the retention policy to avoid log buildup which can lead to unexpected capacity charges; a CLI sketch follows this table.<br><br>For more information, see [Modify log data retention period](manage-storage-analytics-logs.md#modify-log-data-retention-period) |
+| Protocols | Enabling SSH File Transfer Protocol (SFTP) support| Enabling the SFTP endpoint incurs an hourly cost. To avoid passive charges, consider enabling SFTP only when you are actively using it to transfer data.<br><br> For guidance about how to enable and then disable SFTP support, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md). |
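For the Storage Analytics retention setting called out in the monitoring row, here's a minimal Azure CLI sketch; the account name and key are placeholders:

```azurecli
# Set a 90-day retention policy for classic Storage Analytics logs on the
# blob service (b), covering read, write, and delete (rwd) operations.
az storage logging update \
    --services b \
    --log rwd \
    --retention 90 \
    --account-name <storage-account> \
    --account-key <account-key>
```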
## FAQ
storage Storage Solution Large Dataset Low Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-low-network.md
The following table summarizes the differences in key capabilities.
| | Data Box Disk | Data Box | Data Box Heavy | Import/Export |
|--|--|--|--|--|
| **Data size** | Up to 35 TBs | Up to 80 TBs per device | Up to 800 TB per device | Variable |
-| **Data type** | Azure Blobs | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files |
+| **Data type** | Azure Blobs<br>Azure Files* | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files |
| **Form factor** | 5 SSDs per order | 1 X 50-lbs. desktop-sized device per order | 1 X ~500-lbs. large device per order | Up to 10 HDDs/SSDs per order |
| **Initial setup time** | Low <br>(15 mins) | Low to moderate <br> (<30 mins) | Moderate<br>(1-2 hours) | Moderate to difficult<br>(variable) |
| **Send data to Azure** | Yes | Yes | Yes | Yes |
The following table summarizes the differences in key capabilities.
| **Use when data moves** | Within a commerce boundary | Within a commerce boundary | Within a commerce boundary | Across geographic boundaries, e.g. US to EU |
| **Pricing** | [Pricing](https://azure.microsoft.com/pricing/details/databox/disk/) | [Pricing](https://azure.microsoft.com/pricing/details/storage/databox/) | [Pricing](https://azure.microsoft.com/pricing/details/storage/databox/heavy/) | [Pricing](https://azure.microsoft.com/pricing/details/storage-import-export/) |
+*\* Data Box Disk does not support Large File Shares and does not preserve file metadata.*
+ ## Next steps - Understand how to
storage Storage Solution Large Dataset Moderate High Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-moderate-high-network.md
If using offline data transfer, use the following table to understand the differ
| | Data Box Disk | Data Box | Data Box Heavy | Import/Export |
|--|--|--|--|--|
| **Data size** | Up to 35 TBs | Up to 80 TBs per device | Up to 800 TB per device | Variable |
-| **Data type** | Azure Blobs | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files |
+| **Data type** | Azure Blobs<br>Azure Files* | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files | Azure Blobs<br>Azure Files |
| **Form factor** | 5 SSDs per order | 1 X 50-lbs. desktop-sized device per order | 1 X ~500-lbs. large device per order | Up to 10 HDDs/SSDs per order |
| **Initial setup time** | Low <br>(15 mins) | Low to moderate <br> (<30 mins) | Moderate<br>(1-2 hours) | Moderate to difficult<br>(variable) |
| **Send data to Azure** | Yes | Yes | Yes | Yes |
If using offline data transfer, use the following table to understand the differ
| **Use when data moves** | Within a commerce boundary | Within a commerce boundary | Within a commerce boundary | Across geographic boundaries, e.g. US to EU |
| **Pricing** | [Pricing](https://azure.microsoft.com/pricing/details/databox/disk/) | [Pricing](https://azure.microsoft.com/pricing/details/storage/databox/) | [Pricing](https://azure.microsoft.com/pricing/details/storage/databox/heavy/) | [Pricing](https://azure.microsoft.com/pricing/details/storage-import-export/) |
*\* Data Box Disk does not support Large File Shares and does not preserve file metadata.*
+ If using online data transfer, use the table in the following section for high network bandwidth. ### High network bandwidth
storage Files Troubleshoot Smb Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md
The solution is to add the privateLink FQDN to the storage account's Azure AD application.
1. Select **Manifest** in the left pane.
1. Copy and paste the existing content so you have a duplicate copy. Replace all instances of `<storageaccount>.file.core.windows.net` with `<storageaccount>.privatelink.file.core.windows.net`.
1. Review the content and select **Save** to update the application object with the new identifierUris.
+1. Update any internal DNS references to point to the private link.
1. Retry mounting the share.

## Need help?
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Azure Files customers can now use identity-based Kerberos authentication for Linux clients.
Nconnect is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the Linux client and the Azure Premium Files service for NFSv4.1. With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+#### Improved Azure File Sync service availability
+
+Azure File Sync is now a zone-redundant service, which means that an outage in a zone has limited impact on the service, improving resiliency and minimizing customer impact. To take full advantage of this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or geo-zone-redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Storage redundancy](../common/storage-redundancy.md). A CLI sketch for creating a ZRS account follows the note below.
+
+Note: Azure File Sync is zone-redundant in all regions that [support zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support) except US Gov Virginia.
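A minimal Azure CLI sketch for creating a ZRS general-purpose v2 account, with placeholder names:

```azurecli
# Create a general-purpose v2 storage account that uses zone-redundant storage.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <region> \
    --sku Standard_ZRS \
    --kind StorageV2
```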
## What's new in 2022

### 2022 quarter 4 (October, November, December)
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 01/03/2023 Last updated : 05/03/2023
recommendations: false
# Enable Azure Active Directory Domain Services authentication on Azure Files [!INCLUDE [storage-files-aad-auth-include](../../../includes/storage-files-aad-auth-include.md)]
-This article focuses on enabling and configuring Azure AD DS for identity-based authentication with Azure file shares.
+This article focuses on enabling and configuring Azure AD DS for identity-based authentication with Azure file shares. In this authentication scenario, Azure AD credentials and Azure AD DS credentials are the same and can be used interchangeably.
We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the AD source you choose.
Azure Files authentication with Azure AD DS is available in [all Azure Public, G
Before you enable Azure AD DS authentication over SMB for Azure file shares, verify that your Azure AD and Azure Storage environments are properly configured. We recommend that you walk through the [prerequisites](#prerequisites) to make sure you've completed all the required steps.
-Next, do the following things to grant access to Azure Files resources with Azure AD credentials:
+Follow these steps to grant access to Azure Files resources with Azure AD credentials:
1. Enable Azure AD DS authentication over SMB for your storage account to register the storage account with the associated Azure AD DS deployment. (A CLI sketch follows this list.)
-2. Assign share-level permissions to an Azure AD identity (a user, group, or service principal).
-3. Connect to your Azure file share using a storage account key and configure Windows access control lists (ACLs) for directories and files.
-4. Mount an Azure file share from a domain-joined VM.
+1. Assign share-level permissions to an Azure AD identity (a user, group, or service principal).
+1. Connect to your Azure file share using a storage account key and configure Windows access control lists (ACLs) for directories and files.
+1. Mount an Azure file share from a domain-joined VM.
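As a sketch of step 1 with the Azure CLI (assuming the `--enable-files-aadds` flag available in recent CLI versions; names are placeholders):

```azurecli
# Enable Azure AD DS authentication over SMB for the storage account.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --enable-files-aadds true
```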
The following diagram illustrates the end-to-end workflow for enabling Azure AD DS authentication over SMB for Azure Files.
storage Atempo Quick Start Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/atempo-quick-start-guide.md
+
+ Title: Migrate data to Azure with Atempo Miria
+description: Getting started guide to implement Atempo Miria infrastructure with Azure Storage. This article helps you integrate the Atempo Miria Infrastructure with Azure storage.
++++ Last updated : 05/02/2023++
+# Atempo Quick Start Guide
+
This document helps you get started with configuring Atempo Miria to migrate data to Azure Storage.
+
+## Reference Architecture
+
+The following diagram provides a reference architecture for on-premises to Azure deployments.
++
+Your existing Atempo Miria deployment can easily integrate with Azure by adding and configuring a connection to Azure, either a standard connection or an ExpressRoute.
+
+## Before you Begin
+
+A little upfront planning helps configure your Miria software to use Azure as a data migration target.
+
+### Get Started with Azure
+
+Microsoft offers a framework to get you started with Azure. The Cloud Adoption Framework (CAF) is a detailed approach to enterprise digital transformation and a comprehensive guide to planning a production-grade cloud adoption. The CAF includes a step-by-step Azure setup guide to help you get up and running quickly and securely. You can find an interactive version in the Azure portal. You can find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
+
+### Considerations For Migrations
+
Several aspects are important when considering migrations of file data to Azure. Before proceeding, learn more:
+
+- Storage Migration Overview
+- Latest supported features by Miria in Migration tools comparison matrix
+
+Remember, you need enough network capacity to support migrations without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs.
+
+### Determine Unutilized Internet Bandwidth
+
It's important to know how much unutilized bandwidth (or headroom) you have available on a day-to-day basis, so that you can assess whether you can meet your goals for:
+
+- Initial time for migrations
+- Time required to do incremental resync before final switch-over to the target file service
+
Use the following methods to identify the bandwidth headroom that is free to consume:
+
+- If you're an existing Azure ExpressRoute customer, view your circuit usage in the Azure portal
+- Contact your ISP and request reports to show your existing daily and monthly utilization
+- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level
+
+ - SolarWinds Bandwidth Analyzer Pack
+ - Paessler PRTG
+ - Cisco Network Assistant
+ - WhatsUp Gold
+
+## Implementation Guidance
+
+### Before you begin
+
+This documentation assumes that you already have a Miria Server and Miria Data Mover installed and running. Reference the following documentation for detailed information on how to install Miria Server and Data Mover
+
+- [Miria Server and Data Movers deployment and initial configuration](https://www.atempo.com/privatedocs/Miria_2022_Migration_Documentation.pdf)
+- [Details on platforms and OS versions supported by Miria](https://usergroup.atempo.com/wp-content/uploads/2021/08/COMPATIBILITY-GUIDE_MIRIA_2021.pdf)
+
+The following section guides you in successive steps:
+
+1. Creating and configuring your Azure BLOB Storage
+2. Creating a Miria Target Storage - Azure BLOB
+3. Creating a Miria Source Storage with SMB/CIFS share
+4. Creating and launching your data migration task
+5. Checking on progress, logs, and reports at the project and task level
+6. Creating other tasks in your migration project
+
+### Azure BLOB configuration
+
This section provides a brief guide for how to add Azure BLOB to an on-premises-to-Azure Miria deployment. An equivalent Azure CLI sketch follows the portal steps.
+
+1. Open the Azure portal, and search for storage accounts
++
+2. Select Create to add an account:
+
+- Select an existing resource group or Create new
+- Provide a unique name for your storage account
+- Select the region
+- Select Standard or Premium performance, depending on your needs
+- Select the Redundancy that meets your data protection requirements
++
+3. Next, we recommend the default settings from the Advanced screen
++
4. Keep the default networking options for now and move on to Data protection. You can choose to enable soft delete, which allows you to recover accidentally deleted data within the defined retention period. Soft delete offers protection against accidental or malicious deletion.
++
5. Add tags for organization if you use tagging, and then select Create to create your account
+
+6. Another step is mandatory before you can add the account to your Miria environment. Navigate to the Access keys item under Security + Networking and copy the Storage account name and one of the two access keys.
+
+
+7. Under Data Storage, create a Container with a unique name
++
+8. Optional - Configure extra security best practices
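If you prefer to script the storage account setup instead of using the portal, the following Azure CLI sketch covers the equivalent steps; all names in angle brackets are placeholders, and the SKU should match the performance and redundancy choices described above:

```azurecli
# Create the storage account (step 2 above).
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <region> \
    --sku Standard_LRS

# Retrieve an access key (needed by Miria in the steps that follow).
az storage account keys list \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --query "[0].value" --output tsv

# Create the container with a unique name (step 7 above).
az storage container create \
    --name <container> \
    --account-name <storage-account> \
    --account-key <access-key>
```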
+
+### Creating a Miria Target Storage: Azure BLOB
+
+1. In Miria Web UI, you need to declare the Azure storage and the newly created bucket. To do so, navigate to Infrastructure in the left pane, then select Object storage & application
++
+2. Select the New Storage Manager button on top right
++
+3. In the "Type" drop-down list, select Microsoft Azure BLOB Block among Cloud entries and select Next.
++
4. Select a Storage Manager name (here SM_Azure) and replace the placeholder with your account name in the Network address field:
++
+5. In the Default proxy platform drop-down list, select the desired Data Mover or Data Mover Pool (here WIN-H9K5NN91J0H) used to reach out to your Azure storage
++
+6. Select Create at the bottom
++
Once the Storage Manager is successfully created, we need to create the Miria container associated with this bucket. To do so, select the Back button to display the list of Storage Managers.
+
7. Select the three dots located at the end of the line associated with the Storage Manager we created and select Add Container
++
+8. Select a Storage Manager Container name (here SMC_Azure) and activate the toggle Available as Source to support future workflows. Name the source platform (here Azure)
++
+9. Scroll down to Available as Source toggle and select "Enabled" to support future workflows using this SMC as a source. Name the source platform (here Azure).
++
10. Scroll down to the Configuration section at the bottom and type the Azure account name, its access key, and the container name
++
The "Default" access tier matches the one chosen during the Azure storage account creation (Step 3 above).
+
Then select Create at the bottom. Your SMC is successfully created; select Back to return to the home screen
++
+Congratulations! Your Azure storage and bucket are now fully declared and ready to use.
+
+### Creating a Miria Source Storage for a Windows file server
+
+In this example, we're moving data from an SMB/CIFS share of a Windows file server (our source storage) to Azure (our target storage).
+To create the source storage in Miria:
+
+1. Navigate to the Infrastructure item on the left pane, then select NAS
++
+2. Select the New NAS button in the top right
++
+3. In the NAS Type drop-down list, select Other
+
+4. In the "Protocol" radio button, select Windows (CIFS)
++
+5. Under Data Movers, select Single agent or Pool (depending on your setup) and add a Windows Data Mover
+
+6. Select Next at the bottom right
++
7. In the Stream option text box, add "host=" followed by the FQDN (or IP address) of your NAS
+
+8. Select Next
++
+9. Select a NAS name
+
+10. Add the credentials you want to use for Data migration accessing this share
+
+11. Select Create
++
+Congratulations! Your Windows file server is now ready to be used as a source
++
+## Start your migration
+
+### Creating and starting your data migration task
+
+1. Now you can create your Migration project by selecting Migration in the left pane and New Project:
++
+2. Select the "New Project" button
++
+3. Select your source and target from the drop-down lists and select Next
++
4. Select the folder containing data to migrate on the left side of the panel and select "Add". This folder appears in the selection list in the lower section of the window
++
+Once your folder selection is complete, select Next
+
+5. At this step, you may select among different advanced options if needed. Review them and select Next
++
+6. Select a name for your task then select Create
++
+7. You may now start your migration by clicking Start
++
+The task runs
++
+and completes after a period of time
+++
+On the Azure side, you can verify that your container is populated with your data
++
+### Checking on migration progress, logs, and reports
+
+In the above step, we have created two objects at once
+
+- A migration project,
+- And a task in this migration project
+
+You might want to create a migration task per subset of data to migrate, for instance by usage, user group, department, project, etc., to have more control over the migration of each data set
+
+The Web interface offers multiple options to check on progress:
+
+- At the project level - to get a global view of the progress of all tasks created in the project,
+- At the individual task level - to check on the progress for a specific task such as the data subset
+
To access the logs or more details on the tasks in this project, select the three dots located at the end to display this menu:
+
+- By selecting Show Logs, you see the logs for all tasks in the project
- By selecting See Tasks, you see a graphical overview of the volume associated with all tasks as shown in this screenshot
++
+The above screenshot provides an overview of all iterations of the migration tasks in your project. We currently have only one iteration for our task. We can easily launch a new iteration of the same task to collect all the latest and changed files since the last run. Select the task in the bottom part of the panel and select Start Task. Each iteration of the task is shown on the above interface.
+
+The bottom part of the screen lists the tasks created in this project.
++
+To drill down on an individual task, apply a similar process: select the three-dot menu at the end of the task line to display the task-related submenu
++
+- Selecting Show logs shows the logs for this task only.
+- Selecting Integrity check provides access to the associated report for this task.
+- Selecting See Details provides a graphical volume report with details at the task level as shown in the following screenshot
++
+The lower part of the screen provides more details on the job run.
++
+To download the report associated with this task, select the three-dot menu at the end of the line and select Download Report
+
+### Creating other tasks in your migration project
+
+You can add more tasks to a migration project by going back to the project level: select the word "project" in the breadcrumb
++
+To add a task, use the top menu with the three horizontal dots, select New Task and follow a similar process to create your task within the project
++
+After the administrator adds multiple tasks to the project, the Start menu on the top provides a way to start new iterations for all the tasks in this project at once.
+
+## Support
+
When you need help with your migration to Azure, open a case with both Atempo and Azure
+
+### To open a case with Atempo
+
+On the Atempo Support Site, sign in to your account using the credentials received with your Miria package and open a case
+
+### To open a case with Azure
+
+Search for *Support* in the Azure portal search bar, then select **Help + support** > **New Support Request**
+
+## Next steps
+
+Learn more about the process and recommendations for migrating data to Azure Storage
+
+- [Azure Storage migration overview](../data-management/azure-file-migration-program-solutions.md)
+
+Learn more about Miria, its configuration, and deployment by visiting the documentation provided with your Miria product package
+
+- Miria for Migration
+- Miria User Manual
+- Miria Installation and Getting Started
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
NA Previously updated : 08/22/2022- Last updated : 05/02/2023++ # StorSimple 8000 series: a hybrid cloud storage solution
+> [!CAUTION]
+> **ACTION REQUIRED:** StorSimple Data Manager, StorSimple Device Manager, StorSimple 1200, and StorSimple 8000 have reached their end of support. We're no longer updating this content regularly. Check the Microsoft Product Lifecycle for information about how this product, service, technology, or API is supported.
## Overview Welcome to Microsoft Azure StorSimple, an integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage. StorSimple is an efficient, cost-effective, and easy to manage storage area network (SAN) solution that eliminates many of the issues and expenses that are associated with enterprise storage and data protection. It uses the proprietary StorSimple 8000 series device, integrates with cloud services, and provides a set of management tools for a seamless view of all enterprise storage, including cloud storage. The StorSimple deployment information published on the Microsoft Azure website applies to StorSimple 8000 series devices only.
update-center Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/configure-wu-agent.md
Title: Configure Windows Update settings in Update management center (Preview) description: This article tells how to configure Windows update settings to work with Update management center (Preview). Previously updated : 04/21/2022 Last updated : 05/02/2023
The registry keys listed in [Configuring Automatic Updates by editing the regist
## Enable updates for other Microsoft products
-By default, the Windows Update client is configured to provide updates only for Windows. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other products, including security patches for Microsoft SQL Server and other Microsoft software. You can configure this option if you have downloaded and copied the latest [Administrative template files](https://support.microsoft.com/help/3087759/how-to-create-and-manage-the-central-store-for-group-policy-administra) available for Windows 2016 and later.
+By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software.
-If you have machines running Windows Server 2012 R2, you can't configure this setting through Group Policy. Run the following PowerShell command on these machines:
+Use one of the following options to perform the settings change at scale:
-```powershell
-$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
-$ServiceManager.Services
-$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
-$ServiceManager.AddService2($ServiceId,7,"")
-```
+- For servers configured to patch on a schedule from Update management center (that is, the VM PatchSettings is set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running an operating system earlier than Windows Server 2016, run the following PowerShell script on the server you want to change.
+
+ ```powershell
+ $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+ $ServiceManager.Services
+ $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+ $ServiceManager.AddService2($ServiceId,7,"")
+ ```
+
+- For servers running Windows Server 2016 or later that aren't using Update management center scheduled patching (that is, the VM PatchSettings is set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store). A sketch for checking a VM's current patch mode follows this list.
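+
+To check which patch orchestration mode a VM currently has, a minimal Azure CLI sketch like the following can help; the resource group and VM names are placeholders:
+
+```azurecli-interactive
+# Read the VM's patch orchestration mode, for example AutomaticByPlatform or AutomaticByOS.
+az vm show \
+  --resource-group myResourceGroup \
+  --name myVM \
+  --query "osProfile.windowsConfiguration.patchSettings.patchMode" \
+  --output tsv
+```
+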
## Make WSUS configuration settings
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
+
+ Title: Overview of customized images in Update management center (preview).
+description: The article describes about customized images, how to register, validate the customized images for public preview and its limitations.
+++ Last updated : 05/02/2023+++
+# Manage updates for customized images
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article describes customized image support, how to enable it for your subscription, and its limitations.
++
+## Asynchronous check to validate customized image support
+
+If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update management center (preview) operations such as **Assess now**, **Install Patches now**, or **Schedule patching** to validate whether the assets are supported for guest patching, and then initiate patching if the asset is supported.
+
+Unlike PIR/marketplace images, where support is validated even before an Update management center operation is triggered, customized images have no pre-existing validations in place. The Update management center operations are triggered, and only their success or failure determines support.
+
+For instance, the assessment call attempts to fetch the latest patch that's available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph (ARG) table, which you can query to see the support status for your Azure Compute Gallery image.
+
+> [!NOTE]
+> - Currently, we support [generalized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#generalized-images). Automatic VM guest patching for generalized custom images is not supported.
+> - [Specialized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#specialized-images) and non-Azure Compute gallery images (including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery) are not supported.
++
+## Enable Subscription for Public Preview
+
+To self-register your subscription for the public preview in the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **More services**.
+
+ :::image type="content" source="./media/manage-updates-customized-images/access-more-services.png" alt-text="Screenshot that shows how to access more services option.":::
+
+1. In **All services** page, search for *Preview Features*.
+
+ :::image type="content" source="./media/manage-updates-customized-images/access-preview-services.png" alt-text="Screenshot that shows how to access preview features.":::
+
+1. In **Preview features** page, enter *gallery* and select *VM Guest Patch Gallery Image Preview*.
+
+ :::image type="content" source="./media/manage-updates-customized-images/access-gallery.png" alt-text="Screenshot that shows how to access gallery.":::
+
+1. In **VM Guest Patch Gallery Image Preview**, select **Register** to register your subscription.
+
+ :::image type="content" source="./media/manage-updates-customized-images/register-preview.png" alt-text="Screenshot that shows how to register the preview feature.":::
++
+## Prerequisites to test the Azure Compute Gallery custom images (preview)
+
+- Register the subscription for preview using the steps mentioned in [Enable Subscription for Public Preview](#enable-subscription-for-public-preview).
+- Ensure that the VM on which you intend to execute the API calls is in the same subscription that's enrolled for the feature.
+
+## Check the preview
+
+Initiate the asynchronous support check using either of the following APIs; an Azure CLI sketch follows this list:
+
+1. **API Action Invocation**
+ 1. [Assess patches](https://learn.microsoft.com/rest/api/compute/virtual-machines/assess-patches?tabs=HTTP)
+ 1. [Install patches](https://learn.microsoft.com/rest/api/compute/virtual-machines/install-patches?tabs=HTTP)
+
+1. **Portal operations**: Try the preview:
+ 1. [On demand check for updates](view-updates.md).
+ 1. [One-time update](deploy-updates.md).
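+
+If you'd rather script the check than call the REST APIs directly, the equivalent Azure CLI commands are one option. This is a minimal sketch with placeholder resource names:
+
+```azurecli-interactive
+# Trigger an on-demand assessment (equivalent to the Assess Patches REST action).
+az vm assess-patches --resource-group myResourceGroup --name myVM
+
+# Trigger an on-demand installation (equivalent to the Install Patches REST action).
+az vm install-patches --resource-group myResourceGroup --name myVM \
+  --maximum-duration PT2H --reboot-setting IfRequired \
+  --classifications-to-include-win Critical Security
+```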
+
+**Validate the VM support state**
+
+1. **Azure Resource Graph**
+ 1. Table
+ - `patchassessmentresources`
+ 1. Resource
+ - `Microsoft.compute/virtualmachines/patchassessmentresults/configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState. [Possible values: Unknown, Supported, Unsupported, UnableToDetermine]`
+
+ :::image type="content" source="./media/manage-updates-customized-images/resource-graph-view.png" alt-text="Screenshot that shows the resource in Azure Resource Graph Explorer.":::
+
+We recommend that you execute the Assess Patches API once the VM is provisioned and the prerequisites are set for Public preview. This validates the support state of the VM. If the VM is supported, you can execute the Install Patches API to initiate the patching.
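+
+To read the recorded support state back programmatically, you can query the Azure Resource Graph table mentioned above. This is a minimal sketch; it assumes the Azure CLI resource-graph extension, which the CLI offers to install on first use:
+
+```azurecli-interactive
+# Return each VM's detected guest patch support state from the patchassessmentresources table.
+az graph query -q "patchassessmentresources | where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults' | project id, supportState = properties.configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState"
+```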
+
+## Limitations
+
+1. Currently, it is only applicable to Azure Compute Gallery (SIG) images and not to non-Azure Compute Gallery custom images. The Azure Compute Gallery images are of two types - generalized and specialized. Following are the supported scenarios for both:
+
+ | Images | **Currently supported scenarios** | **Unsupported scenarios** |
+ | | | |
+ | **Azure Compute Gallery: Generalized images** | - On demand assessment </br> - On demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching |
+ | **Azure Compute Gallery: Specialized images** | None | - On demand assessment </br> - On demand patching </br> - Periodic assessment </br> - Scheduled patching </br> - Automatic VM guest patching |
+ | **Non-Azure Compute Gallery images (non-SIG)** | None | - On demand assessment </br> - On demand patching </br> - Periodic assessment </br> - Schedule patching </br> - Automatic VM guest patching |
+
+1. Automatic VM guest patching doesn't work on Azure Compute Gallery images even if the patch orchestration mode is set to **Azure orchestrated/AutomaticByPlatform**. You can use scheduled patching to patch these machines and define your own schedules.
++
+## Next steps
+* [Learn more](support-matrix.md) about supported operating systems.
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
description: Provides a summary of supported regions and operating system settin
Previously updated : 04/21/2022 Last updated : 05/02/2023
Update management center (preview) supports operating system updates for both Wi
> Update management center (preview) doesn't support driver Updates. ### First party updates on Windows
-By default, the Windows Update client is configured to provide updates only for Windows. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other products, including security patches for Microsoft SQL Server and other Microsoft software. You can configure this option if you have downloaded and copied the latest [Administrative template files](https://support.microsoft.com/help/3087759/how-to-create-and-manage-the-central-store-for-group-policy-administra) available for Windows 2016 and later.
+By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software.
-If you have machines running Windows Server 2012 R2, you can't configure this setting through **Group Policy**. Run the following PowerShell command on these machines:
+Use one of the following options to perform the settings change at scale:
+
+- For servers configured to patch on a schedule from Update management center (that is, the VM PatchSettings is set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running an operating system earlier than Windows Server 2016, run the following PowerShell script on the server you want to change.
+
+ ```powershell
+ $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+ $ServiceManager.Services
+ $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+ $ServiceManager.AddService2($ServiceId,7,"")
+ ```
+- For servers running Windows Server 2016 or later that aren't using Update management center scheduled patching (that is, the VM PatchSettings is set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
-```powershell
-$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
-$ServiceManager.Services
-$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
-$ServiceManager.AddService2($ServiceId,7,"")
-```
### Third-party updates **Windows**: Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows update management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
United States | Central US </br> East US </br> East US 2</br> North Central US <
## Supported operating systems
-The following table lists the supported operating systems for Azure VMs and Azure Arc-enabled servers. Before you enable update management center (preview), ensure that the target machines meet the operating system requirements.
-
+> [!NOTE]
+> - All operating systems are assumed to be x64. x86 isn't supported for any operating system.
+> - Update management center (preview) doesn't support CIS hardened images.
# [Azure VMs](#tab/azurevm-os)
->[!NOTE]
-> - For [Azure VMs](../virtual-machines/index.yml), we currently support a combination of Offer, Publisher, and SKU of the VM image. Ensure you match all three to confirm support.
-> - See the list of [supported OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images).
-> - Custom images are currently not supported.
+> [!NOTE]
+> Currently, we don't support [Specialized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#specialized-images) and non-Azure Compute gallery images (including the VMs created by Azure Migrate, Azure Backup, Azure Site Recovery etc.).
+
+**Marketplace/PIR images**
+
+Currently, we support a combination of the Offer, Publisher, and SKU of the image. Ensure that you match all three to confirm support; a CLI sketch for checking them follows. For more information, see the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md).
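+
+One way to check the three values for a given VM is to read its image reference with the Azure CLI. This is a minimal sketch with placeholder names:
+
+```azurecli-interactive
+# Show the Publisher, Offer, and SKU that the VM was created from.
+az vm show \
+  --resource-group myResourceGroup \
+  --name myVM \
+  --query "storageProfile.imageReference.{publisher:publisher, offer:offer, sku:sku}" \
+  --output table
+```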
+
+**Custom images**
+
+We support [generalized Azure Compute Gallery (SIG) custom images](../virtual-machines/linux/imaging.md#generalized-images). The following table lists the operating systems that we support for generalized Azure Compute Gallery images. See [Azure Compute Gallery (SIG) custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update management center to manage updates on custom images.
+
+ |**Windows Operating System**|
+ |-- |
+ |Windows Server 2022|
+ |Windows Server 2019|
+ |Windows Server 2016|
+ |Windows Server 2012 R2|
+ |Windows Server 2012|
+ |Windows Server 2008 R2 (RTM and SP1 Standard)|
++
+ |**Linux Operating System**|
+ |-- |
+ |CentOS 7.8|
+ |Oracle Linux 7.x, 8.x|
+ |Red Hat Enterprise 7, 8, 9|
+ |SUSE Linux Enterprise Server 12.x, 15.0-15.4|
+ |Ubuntu 16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS|
+ # [Azure Arc-enabled servers](#tab/azurearc-os)
-[Azure Arc-enabled servers](../azure-arc/servers/overview.md) are:
+The following table lists the operating systems supported on [Azure Arc-enabled servers](../azure-arc/servers/overview.md):
+
+ |**Operating System**|
+ |-|
+ | Windows Server 2012 R2 and higher (including Server Core) |
+ | Windows Server 2008 R2 SP1 with PowerShell enabled and .NET Framework 4.0+ |
+ | Ubuntu 16.04, 18.04, and 20.04 LTS (x64) |
+ | CentOS Linux 7 and 8 (x64) |
+ | SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) |
+ | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) |
+ | Amazon Linux 2 (x64) |
+ | Oracle 7.x, 8.x|
+ | Debian 10 and 11|
+ | Rocky Linux 8|
++
- | Publisher | Operating System
+## Unsupported operating systems
+
+The following table lists the operating systems that aren't supported:
+
+ | **Operating system**| **Notes**
|-|-|
- | Microsoft Corporation | Windows Server 2012 R2 and higher (including Server Core) |
- | Microsoft Corporation | Windows Server 2008 R2 SP1 with PowerShell enabled and .NET Framework 4.0+ |
- | Canonical | Ubuntu 16.04, 18.04, and 20.04 LTS (x64) |
- | Red Hat | CentOS Linux 7 and 8 (x64) |
- | SUSE | SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) |
- | Red Hat | Red Hat Enterprise Linux (RHEL) 7 and 8 (x64) |
- | Amazon | Amazon Linux 2 (x64) |
- | Oracle | Oracle 7.x |
+ | Windows client | For client operating systems such as Windows 10 and Windows 11, we recommend [Microsoft Intune](https://learn.microsoft.com/mem/intune/) to manage updates.|
+ | Virtual machine scale sets| We recommend that you use [Automatic upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to patch the virtual machine scale sets.|
+ | Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/azure/aks/node-updates-kured).|
-
-As the Update management center (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client are enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
+As the Update management center (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager or the Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
- > [!NOTE]
- > For patching, update management center (preview) relies on classification data available on the machine. Unlike other distributions, CentOS YUM package manager does not have this information available in the RTM version to classify updates and packages in different categories.
- ## Next steps - [View updates for single machine](view-updates.md) - [Deploy updates now (on-demand) for single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md)-- [Manage update settings via Portal](manage-update-settings.md)
+- [Manage update settings via Portal](manage-update-settings.md)
virtual-desktop Manage App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups.md
The deployment process will do the following things for you:
- Register the application group, if you chose to do so. - Create a link to an Azure Resource Manager template based on your configuration that you can download and save for later.
+Once a user connects to a RemoteApp, any other RemoteApps that they connect to during the same session will be from the same session host.
+ >[!IMPORTANT] >You can only create 500 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to application groups created in Azure Virtual Desktop (classic).
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Title: Prerequisites for Azure Virtual Desktop
description: Find what prerequisites you need to complete to successfully connect your users to their Windows desktops and applications. Previously updated : 08/08/2022 Last updated : 05/03/2023
At a high level, you'll need:
> [!div class="checklist"] > - An Azure account with an active subscription
-> - An identity provider
-> - A supported operating system
+> - A supported identity provider
+> - A supported operating system for session host virtual machines
> - Appropriate licenses > - Network connectivity > - A Remote Desktop client
At a high level, you'll need:
You'll need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Your account must be assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
-You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed:
+You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed, select the relevant tab for your scenario and follow the steps.
> [!IMPORTANT] > You must have permission to register a resource provider, which requires the `*/register/action` operation. This is included if your account is assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
+# [Azure portal](#tab/portal)
1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Subscriptions**.
+1. Select the name of your subscription.
+1. Select **Resource providers**.
+1. Search for **Microsoft.DesktopVirtualization**.
+1. If the status is *NotRegistered*, select **Microsoft.DesktopVirtualization**, and then select **Register**.
-1. Verify that the status of Microsoft.DesktopVirtualization is **Registered**.
+
+1. Verify that the status of Microsoft.DesktopVirtualization is *Registered*.
+
+# [Azure CLI](#tab/cli)
++
+2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this even if the resource provider is already registered.
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.DesktopVirtualization
+ ```
+
+3. Verify that the parameter **RegistrationState** is set to *Registered* by running the following command:
+
+ ```azurecli-interactive
+ az provider show \
+ --namespace Microsoft.DesktopVirtualization \
+ --query {RegistrationState:registrationState}
+ ```
+
+# [Azure PowerShell](#tab/powershell)
++
+2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this even if the resource provider is already registered.
+
+ ```azurepowershell-interactive
+ Register-AzResourceProvider -ProviderNamespace Microsoft.DesktopVirtualization
+ ```
+
+3. In the output, verify that the parameter **RegistrationState** is set to *Registered*. You can also run the following command:
+
+ ```azurepowershell-interactive
+ Get-AzResourceProvider -ProviderNamespace Microsoft.DesktopVirtualization
+ ```
++ ## Identity
You have a choice of operating systems that you can use for session hosts to pro
|Operating system |User access rights| |||
-|<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) instead of license entitlement.</li></ul>|
+|<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) by enrolling an Azure subscription instead of license entitlement.</li></ul>|
|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li><li>[Windows Server 2012 R2](/lifecycle/products/windows-server-2012-r2)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.| > [!IMPORTANT]
You can deploy virtual machines (VMs) to be used as session hosts from these ima
- Manually, in the Azure portal and [adding to a host pool after you've created it](expand-existing-host-pool.md). - Programmatically, with [Azure CLI, PowerShell](create-host-pools-powershell.md), or [REST API](/rest/api/desktopvirtualization/).
+If your license entitles you to use Azure Virtual Desktop, you don't need to install or apply a separate license. However, if you're using per-user access pricing for external users, you need to [enroll an Azure subscription](remote-app-streaming/per-user-access-pricing.md). You also need to make sure the Windows license used on your session hosts is correctly assigned in Azure and that the operating system is activated. For more information, see [Apply Windows license to session host virtual machines](apply-windows-license.md).
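+
+If you need to assign the Windows license benefit to an existing session host, a minimal Azure CLI sketch looks like the following; the resource names are placeholders:
+
+```azurecli-interactive
+# Apply the Windows license benefit to a session host VM.
+az vm update \
+  --resource-group myResourceGroup \
+  --name mySessionHost \
+  --license-type Windows_Client
+```
+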
+ There are different automation and deployment options available depending on which operating system and version you choose, as shown in the following table: |Operating system|Azure Image Gallery|Manual VM deployment|Azure Resource Manager template integration|Deploy host pools from Azure Marketplace|
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
Here's a summary of the two types of licenses for Azure Virtual Desktop you can
- Pay-as-you-go through an Azure meter - Cost per user each month depends on user behavior - Only includes access rights to Azure Virtual Desktop
+ - Includes rights to use [FSLogix](/fslogix/overview-what-is-fslogix)
> [!IMPORTANT] > Per-user access pricing only supports Windows 10 Enterprise multi-session and Windows 11 Enterprise multi-session. Per-user access pricing currently doesn't support Windows Server session hosts.
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
You must connect to Azure Virtual Desktop with one of the following clients to u
To configure screen capture protection:
-1. Download the [Azure Virtual Desktop policy templates file](https://aka.ms/avdgpo) (AVDGPTemplate.cab) and extract the contents of the cab file and zip archive.
+1. Download the [Azure Virtual Desktop policy templates file](https://aka.ms/avdgpo) (*AVDGPTemplate.cab*). You can use File Explorer to open *AVDGPTemplate.cab*, then extract the zip archive inside the *AVDGPTemplate.cab* file to a temporary location.
2. Copy the **terminalserver-avd.admx** file to the **%windir%\PolicyDefinitions** folder. 3. Copy the **en-us\terminalserver-avd.adml** file to the **%windir%\PolicyDefinitions\en-us** folder. 4. To confirm the files copied correctly, open the Group Policy Editor and go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see one or more Azure Virtual Desktop policies, as shown in the following screenshot.
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
Title: Azure Virtual Desktop session host statuses and health checks
description: How to troubleshoot the failed session host statuses and failed health checks Previously updated : 04/21/2023 Last updated : 05/03/2023
The following table lists all statuses for session hosts in the Azure portal eac
| Session host status | Description | How to resolve related issues | |||| |Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as ΓÇ£Available." |N/A|
-|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. You can find which health checks have failed in the session hosts detailed view in the Azure portal. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. | |Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.| |Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
virtual-machines Disk Bursting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-bursting.md
Title: Managed disk bursting
description: Learn about disk bursting for Azure disks and Azure virtual machines. Previously updated : 02/22/2023 Last updated : 05/02/2023
virtual-machines Disks Enable Bursting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-bursting.md
Title: Enable on-demand disk bursting
description: Enable on-demand disk bursting on your managed disk. Previously updated : 10/12/2022 Last updated : 05/02/2023
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Last updated 04/05/2022-+ # Ebdsv5 and Ebsv5 series
The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver higher remote storage performance in each VM size than the [Ev4 series](ev4-esv4-series.md). The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads. For example, relational databases and data analytics applications.
-The Ebsv5 and Ebdsv5 VMs offer up to 120000 IOPS and 4000 MBps of remote disk storage throughput. Both series also include up to 512 GiB of RAM. The Ebdsv5 series has local SSD storage up to 2400 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
+The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake)
- [Intel® Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) - [Intel® Advanced Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) - Support for [Intel® Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html)
+- NVMe interface for higher remote disk storage IOPS and throughput performance
> [!IMPORTANT] > - Accelerated networking is required and turned on by default on all Ebsv5 and Ebdsv5 VMs. > - Ebsv5 and Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
+> - The E112i size is offered as NVMe-only to provide the highest IOPS and throughput performance. To achieve higher remote storage performance for smaller sizes, refer to the [instructions](enable-nvme-interface.md) on how to switch to the NVMe interface for sizes ranging from 2 to 96 vCPUs. See the NVMe VM spec table for the improved performance details.
+> - The NVMe capability is only available in the following regions: US North, Southeast Asia, West Europe, Australia East, North Europe, West US 3, UK South, Sweden Central, East US, Central US, West US2, East US 2, South Central US.
## Ebdsv5 series
-Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 512 GiB of RAM, in addition to fast and large local SSD storage (up to 2400 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, high-speed local storage. Remote Data disk storage is billed separately from VMs.
+Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 672 GiB of RAM, in addition to fast and large local SSD storage (up to 3800 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, high-speed local storage. Remote Data disk storage is billed separately from VMs.
- [Premium Storage](premium-storage-performance.md): Supported - [Premium Storage caching](premium-storage-performance.md): Supported
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported (required) - [Ephemeral OS Disks](ephemeral-os-disks.md): Supported - Nested virtualization: Supported
+- NVMe Interface: Supported only on Generation 2 VMs
+- SCSI Interface: Supported on Generation 1 and 2 VMs
+## Ebdsv5 series (SCSI)
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | ||||||||||||| | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 | | Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 | | Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 20000 |
+| Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 25000 |
-
+## Ebdsv5 series (NVMe)
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+|||||||||||||
+| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
+| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
+| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
+| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 |
+| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 132000/4000 | 150000/5000 | 160000/4000|160000/4000 | 8 | 16000 |
+| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 176000/5000 | 200000/5000 |160000/4000 | 160000/4000| 8 | 20000 |
+| Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 260000/7500 | 260000/8000 |260000/6500 | 260000/6500 | 8 | 25000 |
+| Standard_E112ibds_v5 | 112| 672 | 3800 | 64 | 450000/4000 | 260000/8000 | 260000/8000 |260000/6500 | 260000/6500| 8 | 40000 |
## Ebsv5 series Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance but with no local SSD storage. Ebsv5-series VMs feature Intel® Hyper-Threading Technology. Remote Data disk storage is billed separately from VMs.
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported (required) - [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported - Nested virtualization: Supported-
+- NVMe Interface: Supported only on Generation 2 VMs
+- SCSI Interface: Supported on Generation 1 and Generation 2 VMs
+## Ebsv5 series (SCSI)
| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | | | | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 |117920/2500 |160000/4000 | 8 | 16000 | | Standard_E48bs_v5 | 48 | 384 | 32 | 120000/4000 | 120000/4000 | 160000/4000| 160000/4000| 8 | 16000 | | Standard_E64bs_v5 | 64 | 512 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 20000 |
+| Standard_E96bs_v5 | 96 | 672 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 25000 |
+## Ebsv5 series (NVMe)
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| | | | | | | | | | |
+| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 |
+| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
+| Standard_E8bs_v5 | 8 | 64 | 16 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
+| Standard_E16bs_v5 | 16 | 128 | 32 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500
+| Standard_E32bs_v5 | 32 | 256 | 32 | 88000/2500 | 120000/4000 |117920/2500 |160000/4000 | 8 | 16000 |
+| Standard_E48bs_v5 | 48 | 384 | 32 | 132000/4000 | 150000/5000 | 160000/4000| 160000/4000| 8 | 16000 |
+| Standard_E64bs_v5 | 64 | 512 | 32 | 176000/5000 | 200000/5000 | 160000/4000|160000/4000 | 8 | 20000 |
+| Standard_E96bs_v5 | 96 | 672 | 32 | 260000/7500 | 260000/8000 | 260000/6500|260000/6500 | 8 | 25000 |
+| Standard_E112ibs_v5 | 112| 672 | 64 | 260000/8000 | 260000/8000 | 260000/6500|260000/6500 | 8 | 40000 |
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- [High performance compute](sizes-hpc.md) - [Previous generations](sizes-previous-gen.md)
-Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+ ## Next steps
+- [Enabling NVMe Interface](enable-nvme-interface.md)
+- [Enable NVMe FAQs](enable-nvme-faqs.yml)
- Use the Azure [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
+
+ Title: Enable NVMe Interface.
+description: Enable NVMe interface on virtual machine
+++++ Last updated : 05/01/2023+++++
+# Enabling NVMe and SCSI Interface on Virtual Machine
++
+NVMe stands for nonvolatile memory express, which is a communication protocol that facilitates faster and more efficient data transfer between servers and storage systems. With NVMe, data can be transferred at the highest throughput and with the fastest response time. Azure now supports the NVMe interface on the Ebsv5 and Ebdsv5 family, offering the highest IOPS and throughput performance for remote disk storage among all the GP v5 VM series.
+
+SCSI (Small Computer System Interface) is a legacy standard for physically connecting and transferring data between computers and peripheral devices. Although Ebsv5 VM sizes still support SCSI, we recommend switching to NVMe for better performance benefits.
+
+## Prerequisites
+
+A new feature has been added to the VM configuration, called DiskControllerType, which allows you to select your preferred controller type as NVMe or SCSI. If you don't specify a DiskControllerType value, the platform automatically chooses the default controller based on the VM size configuration. If the VM size is configured for SCSI as the default and supports NVMe, SCSI is used unless you update the DiskControllerType to NVMe.
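+
+To see which controller type a VM is currently configured with, you can read the property back with the Azure CLI. This is a minimal sketch, assuming an API version that exposes DiskControllerType; the names are placeholders:
+
+```azurecli-interactive
+# Show the VM's current disk controller type (NVMe or SCSI).
+az vm show \
+  --resource-group myResourceGroup \
+  --name myVM \
+  --query "storageProfile.diskControllerType" \
+  --output tsv
+```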
+
+To enable the NVMe interface, the following prerequisites must be met:
+
+- Choose a VM family that supports NVMe. It's important to note that only Ebsv5 and Ebdsv5 VM sizes are equipped with NVMe in the Intel v5 generation VMs. Make sure to select either one of the series, Ebsv5 or Ebdsv5 VM.
+- Select the operating system image that is tagged with NVMe support
+- Opt in to NVMe by selecting the NVMe disk controller type in the Azure portal or in an ARM, CLI, or PowerShell template. For step-by-step instructions, see [Launching a VM with NVMe interface](#launching-a-vm-with-nvme-interface)
+- Only Gen2 images are supported
+- Choose one of the Azure regions where NVMe is enabled
+
+After you meet these five conditions, you can enable NVMe on a supported VM family and create or resize a VM with NVMe without complications. Refer to the [FAQ](enable-nvme-faqs.yml) to learn about NVMe enablement.
+## OS Images supported
+
+### Linux
+| Distribution | Image |
+|--||
+| Almalinux 8.x (currently 8.7) | almalinux:almalinux:8-gen2:latest |
+| Almalinux 9.x (currently 9.1) | almalinux:almalinux:9-gen2:latest |
+| Debian 11 | Debian:debian-11:11-gen2:latest |
+| CentOS 7.9 | openlogic:centos:7_9-gen2:latest |
+| RHEL 7.9 | RedHat:RHEL:79-gen2:latest |
+| RHEL 8.6 | RedHat:RHEL:86-gen2:latest |
+| RHEL 8.7 | RedHat:RHEL:87-gen2:latest |
+| RHEL 9.1 | RedHat:RHEL:91-gen2:latest |
+| Ubuntu 18.04 | Canonical:UbuntuServer:18_04-lts-gen2:latest |
+| Ubuntu 20.04 | Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest |
+| Ubuntu 22.04 | canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest |
+
+### Windows
+
+- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore)
+- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore2019-datacenter-core-smalldisk-g2)
+- [Azure portal - Plan ID: 2019 datacenter-core](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore)
+- [Azure portal - Plan ID: 2019-datacenter-core-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore2019-datacenter-core-g2)
+- [Azure portal - Plan ID: 2019-datacenter-core-with-containers-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCorewithContainers)
+- [Azure portal - Plan ID: 2019-datacenter-core-with-containers-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCorewithContainers2019-datacenter-core-with-containers-smalldisk-g2)
+- [Azure portal - Plan ID: 2019-datacenter-with-containers-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterwithContainers2019-datacenter-with-containers-smalldisk-g2)
+- [Azure portal - Plan ID: 2019-datacenter-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019Datacenter)
+- [Azure portal - Plan ID: 2019-datacenter-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019Datacenter2019-datacenter-smalldisk-g2)
+- [Azure portal - Plan ID: 2019-datacenter-zhcn](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenterzhcn)
+- [Azure portal - Plan ID: 2019-datacenter-zhcn-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenterzhcn2019-datacenter-zhcn-g2)
+- [Azure portal - Plan ID: 2019-datacenter-core-with-containers](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCorewithContainers)
+- [Azure portal - Plan ID: 2019-datacenter-core-with-containers-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCorewithContainers2019-datacenter-core-with-containers-g2)
+- [Azure portal - Plan ID: 2019-datacenter-with-containers](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterwithContainers)
+- [Azure portal - Plan ID: 2019-datacenter-with-containers-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterwithContainers2019-datacenter-with-containers-g2)
+- [Azure portal - Plan ID: 2019-datacenter](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenter)
+- [Azure portal - Plan ID: 2019-datacenter-gensecond](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenter2019-datacenter-gensecond)
+- [Azure portal - Plan ID: 2022-datacenter-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core)
+- [Azure portal - Plan ID: 2022-datacenter-core-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-g2)
+- [Azure portal - Plan ID: 2022-datacenter-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-smalldisk)
+- [Azure portal - Plan ID: 2022-datacenter-smalldisk-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-smalldisk-g2)
+- [Azure portal - Plan ID: 2022-datacenter](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter)
+- [Azure portal - Plan ID: 2022-datacenter-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-g2)
+- [Azure portal - Plan ID: 2022-datacenter-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-smalldisk)
+- [Azure portal - Plan ID: 2022-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-smalldisk-g2)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-smalldisk)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core)
+- [Azure portal - Plan 2022-datacenter-azure-edition-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core-smalldisk)
+
+## Launching a VM with NVMe interface
+NVMe can be enabled during VM creation using various methods such as the Azure portal, CLI, PowerShell, and ARM templates. To create an NVMe VM, you must enable the NVMe option on the VM and select NVMe as the disk controller type. Note that the NVMe DiskControllerType can be set during creation, or updated to NVMe when the VM is stopped and deallocated, provided that the VM size supports NVMe.
+
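+As a command-line alternative to the portal flow below, here's a minimal Azure CLI sketch. It assumes a CLI version that supports the --disk-controller-type parameter; the resource names and image are placeholders, and the image must be one tagged with NVMe support:
+
+```azurecli-interactive
+# Create an Ebsv5 VM with the NVMe disk controller.
+az vm create \
+  --resource-group myResourceGroup \
+  --name myNvmeVM \
+  --size Standard_E8bs_v5 \
+  --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
+  --disk-controller-type NVMe
+```
+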
+### Azure portal View
+1. Add the Disk Controller filter. To find the NVMe-eligible sizes, select **See All Sizes**, select the **Disk Controller** filter, and then select **NVMe**:
+
+ :::image type="content" source="./media/enable-nvme/azure-portal-1.png" alt-text="Screenshot of instructions to add disk controller filter for NVMe interface.":::
+
+1. Enable NVMe feature by visiting the **Advanced** tab.
+
+ :::image type="content" source="./media/enable-nvme/azure-portal-2.png" alt-text="Screenshot of instructions to enable NVMe interface feature.":::
+
+1. Verify the feature is enabled by going to **Review and Create**.
+
+ :::image type="content" source="./media/enable-nvme/azure-portal-3.png" alt-text="Screenshot of instructions to review and verify features enablement.":::
+
+### Sample ARM template
+
+```json
++
+{
+    "apiVersion": "2022-08-01",
+    "type": "Microsoft.Compute/virtualMachines",
+    "name": "[variables('vmName')]",
+    "location": "[parameters('location')]",
+    "identity": {
+        "type": "userAssigned",
+        "userAssignedIdentities": {
+            "/subscriptions/ <EnterSubscriptionIdHere> /resourcegroups/ManagedIdentities/providers/Microsoft.ManagedIdentity/userAssignedIdentities/KeyVaultReader": {}
+        }
+    },
+    "dependsOn": [
+        "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
+    ],
+    "properties": {
+        "hardwareProfile": {
+            "vmSize": "[parameters('vmSize')]"
+        },
+        "osProfile": "[variables('vOsProfile')]",
+        "storageProfile": {
+            "imageReference": "[parameters('osDiskImageReference')]",
+            "osDisk": {
+                "name": "[variables('diskName')]",
+                "caching": "ReadWrite",
+                "createOption": "FromImage"
+            },
+            "copy": [
+                {
+                    "name": "dataDisks",
+                    "count": "[parameters('numDataDisks')]",
+                    "input": {
+                        "caching": "[parameters('dataDiskCachePolicy')]",
+                        "writeAcceleratorEnabled": "[parameters('writeAcceleratorEnabled')]",
+                        "diskSizeGB": "[parameters('dataDiskSize')]",
+                        "lun": "[add(copyIndex('dataDisks'), parameters('lunStartsAt'))]",