Updates from: 04/11/2023 01:06:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
Previously updated : 01/23/2023 Last updated : 04/10/2023 zone_pivot_groups: app-provisioning-cross-tenant-synchronization
You can test the feature by triggering disable / deletion events by setting the
Let the provisioning job run (20–40 mins) and navigate back to the provisioning page. You'll see the provisioning job in quarantine and can choose to allow the deletions or review the provisioning logs to understand why the deletions occurred.
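One way to trigger such events for a couple of test accounts is the Azure CLI. This is a sketch only; the UPNs below are placeholders and should be replaced with test users that are in scope for provisioning.

```bash
# Block sign-in for a test user (a disable event for apps that map accountEnabled to the SCIM active attribute).
az ad user update --id test.user1@contoso.com --account-enabled false

# Soft-delete another test user (moves the account to the recycle bin and triggers a deletion event).
az ad user delete --id test.user2@contoso.com
```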
-## Common de-provisioning scenarios to test
+## Common deprovisioning scenarios to test
- Delete a user / put them into the recycle bin.
- Block sign in for a user.
- Unassign a user or group from the application (or configuration).
- Remove a user from a group that's providing them access to the application (or configuration).
-To learn more about de-provisioning scenarios, see [How Application Provisioning Works](how-provisioning-works.md#de-provisioning).
+To learn more about deprovisioning scenarios, see [How Application Provisioning Works](how-provisioning-works.md#deprovisioning).
## Frequently Asked Questions
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 04/04/2023 Last updated : 04/10/2023
The **Azure AD Provisioning Service** provisions users to SaaS apps and other sy
## Provisioning using SCIM 2.0
-The Azure AD provisioning service uses the [SCIM 2.0 protocol](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses SCIM user object schema and REST APIs to automate the provisioning and de-provisioning of users and groups. A SCIM-based provisioning connector is provided for most applications in the Azure AD gallery. Developers use the SCIM 2.0 user management API in Azure AD to build endpoints for their apps that integrate with the provisioning service. For details, see [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+The Azure AD provisioning service uses the [SCIM 2.0 protocol](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses SCIM user object schema and REST APIs to automate the provisioning and deprovisioning of users and groups. A SCIM-based provisioning connector is provided for most applications in the Azure AD gallery. Developers use the SCIM 2.0 user management API in Azure AD to build endpoints for their apps that integrate with the provisioning service. For details, see [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md).
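As a rough illustration of the kind of request the service sends (a sketch only; the SCIM endpoint URL, bearer token, and attribute values are placeholders and vary by application), a SCIM 2.0 create-user call looks like this:

```bash
# Hypothetical SCIM endpoint and token; real requests are issued by the provisioning service itself.
curl -X POST "https://scim.example.com/scim/v2/Users" \
  -H "Authorization: Bearer $SCIM_TOKEN" \
  -H "Content-Type: application/scim+json" \
  -d '{
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "alice@contoso.com",
        "name": { "givenName": "Alice", "familyName": "Smith" },
        "active": true
      }'
```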
To request an automatic Azure AD provisioning connector for an app that doesn't currently have one, see [Azure Active Directory Application Request](../manage-apps/v2-howto-app-gallery-listing.md).
Credentials are required for Azure AD to connect to the application's user manag
When you enable user provisioning for a third-party SaaS application, the Azure portal controls its attribute values through attribute mappings. Mappings determine the user attributes that flow between Azure AD and the target application when user accounts are provisioned or updated.
-There's a pre-configured set of attributes and attribute mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects along with Users, such as Groups.
+There's a preconfigured set of attributes and attribute mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects along with Users, such as Groups.
When setting up provisioning, it's important to review and configure the attribute mappings and workflows that define which user (or group) properties flow from Azure AD to the application. Review and configure the matching property (**Match objects using this attribute**) that is used to uniquely identify and match users/groups between the two systems.
When you configure provisioning to a SaaS application, one of the types of attri
For outbound provisioning from Azure AD to a SaaS application, relying on [user or group assignments](../manage-apps/assign-user-or-group-access-portal.md) is the most common way to determine which users are in scope for provisioning. Because user assignments are also used for enabling single sign-on, the same method can be used for managing both access and provisioning. Assignment-based scoping doesn't apply to inbound provisioning scenarios such as Workday and Successfactors.
-* **Groups.** With an Azure AD Premium license plan, you can use groups to assign access to a SaaS application. Then, when the provisioning scope is set to **Sync only assigned users and groups**, the Azure AD provisioning service provisions or de-provisions users based on whether they're members of a group that's assigned to the application. The group object itself isn't provisioned unless the application supports group objects. Ensure that groups assigned to your application have the property "SecurityEnabled" set to "True".
+* **Groups.** With an Azure AD Premium license plan, you can use groups to assign access to a SaaS application. Then, when the provisioning scope is set to **Sync only assigned users and groups**, the Azure AD provisioning service provisions or deprovisions users based on whether they're members of a group that's assigned to the application. The group object itself isn't provisioned unless the application supports group objects. Ensure that groups assigned to your application have the property "SecurityEnabled" set to "True".
* **Dynamic groups.** The Azure AD user provisioning service can read and provision users in [dynamic groups](../enterprise-users/groups-create-rule.md). Keep these caveats and recommendations in mind: * Dynamic groups can impact the performance of end-to-end provisioning from Azure AD to SaaS applications.
- * How fast a user in a dynamic group is provisioned or de-provisioned in a SaaS application depends on how fast the dynamic group can evaluate membership changes. For information about how to check the processing status of a dynamic group, see [Check processing status for a membership rule](../enterprise-users/groups-create-rule.md).
+ * How fast a user in a dynamic group is provisioned or deprovisioned in a SaaS application depends on how fast the dynamic group can evaluate membership changes. For information about how to check the processing status of a dynamic group, see [Check processing status for a membership rule](../enterprise-users/groups-create-rule.md).
- * When a user loses membership in the dynamic group, it's considered a de-provisioning event. Consider this scenario when creating rules for dynamic groups.
+ * When a user loses membership in the dynamic group, it's considered a deprovisioning event. Consider this scenario when creating rules for dynamic groups.
* **Nested groups.** The Azure AD user provisioning service can't read or provision users in nested groups. The service can only read and provision users that are immediate members of an explicitly assigned group. This limitation of "group-based assignments to applications" also affects single sign-on (see [Using a group to manage access to SaaS applications](../enterprise-users/groups-saasapps.md)). Instead, directly assign or otherwise [scope in](define-conditional-rules-for-provisioning-user-accounts.md) the groups that contain the users who need to be provisioned.
Performance depends on whether your provisioning job is running an initial provi
All operations run by the user provisioning service are recorded in the Azure AD [Provisioning logs (preview)](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). The logs include all read and write operations made to the source and target systems, and the user data that was read or written during each operation. For information on how to read the provisioning logs in the Azure portal, see the [provisioning reporting guide](./check-status-user-account-provisioning.md).
-## De-provisioning
-The Azure AD provisioning service keeps source and target systems in sync by de-provisioning accounts when user access is removed.
+## Deprovisioning
+The Azure AD provisioning service keeps source and target systems in sync by deprovisioning accounts when user access is removed.
The provisioning service supports both deleting and disabling (sometimes referred to as soft-deleting) users. The exact definition of disable and delete varies based on the target app's implementation, but generally a disable indicates that the user can't sign in. A delete indicates that the user has been removed completely from the application. For SCIM applications, a disable is a request to set the *active* property to false on a user.
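As a sketch of what that disable request can look like on the wire (the endpoint, user ID, and token below are placeholders), the service sends a SCIM PATCH that sets *active* to false:

```bash
# Hypothetical SCIM endpoint, user ID, and token; the PatchOp body is the standard SCIM 2.0 shape.
curl -X PATCH "https://scim.example.com/scim/v2/Users/{user-id}" \
  -H "Authorization: Bearer $SCIM_TOKEN" \
  -H "Content-Type: application/scim+json" \
  -d '{
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
          { "op": "replace", "path": "active", "value": false }
        ]
      }'
```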
Confirm the mapping for *active* for your application. If you're using an applic
**Configure your application to delete a user**

The following scenarios trigger a disable or a delete:
-* A user is soft-deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
- 30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
+* A user is soft-deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false). Thirty days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD.
* A user is unassigned from an app.
* A user goes from in scope to out of scope (doesn't pass a scoping filter anymore).
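If you want to script the manual permanent deletion described above, one option is Microsoft Graph. This is a sketch only; $TOKEN is a placeholder access token with sufficient privileges, and {user-object-id} is the deleted user's object ID.

```bash
# Permanently removes the user from the recycle bin, which causes the provisioning service to send a DELETE to the app.
curl -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/directory/deletedItems/{user-object-id}"
```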
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
The server needs to have TLS 1.2 enabled before you install the Application Prox
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```
+ A `regedit` file you can use to set these values follows:
+
+ ```
+ Windows Registry Editor Version 5.00
+
+ [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
+ [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
+ "DisabledByDefault"=dword:00000000
+ "Enabled"=dword:00000001
+ [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
+ "DisabledByDefault"=dword:00000000
+ "Enabled"=dword:00000001
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
+ "SchUseStrongCrypto"=dword:00000001
+ ```
+
1. Restart the server.

For more information about the network requirements for the connector server, see [Get started with Application Proxy and install a connector](application-proxy-add-on-premises-application.md).
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Tenants are set to either Pre-migration or Migration in Progress by default, dep
## Known issues and limitations

- In recent updates we removed the ability to target individual users. Previously targeted users will remain in the policy, but we recommend moving them to a targeted group.
+- Registration of FIDO2 security keys may fail for some users if the FIDO2 Authentication method policy is targeted for a group and the overall Authentication methods policy has more than 20 groups configured. We're working on increasing the policy size limit and in the meantime recommend limiting the number of group targets to no more than 20.
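To review how many groups your Authentication methods policy currently targets, you can inspect the policy with Microsoft Graph. This is a sketch only; $TOKEN is a placeholder access token with a permission such as Policy.Read.All.

```bash
# Returns every authentication method configuration; count the entries in the includeTargets arrays.
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy"
```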
## Next steps
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
In addition:
>You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. If your organization uses Remote Desktop Gateway and the user is registered for a TOTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Microsoft Authenticator.
+This is because TOTP will be preferred over the **Approve**/**Deny** push notification and Remote Desktop Gateway doesn't provide the option to enter a verification code with Azure AD Multi-Factor Authentication. For more information, see [Configure accounts for two-step verification](howto-mfa-nps-extension-rdg.md#configure-accounts-for-two-step-verification).
### Apple Watch supported for Microsoft Authenticator
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
The Network Policy and Access Services (NPS) gives organizations the ability to
Typically, organizations use NPS (RADIUS) to simplify and centralize the management of VPN policies. However, many organizations also use NPS to simplify and centralize the management of RD Desktop Connection Authorization Policies (RD CAPs).
-Organizations can also integrate NPS with Azure AD MFA to enhance security and provide a high level of compliance. This helps ensure that users establish two-step verification to sign in to the Remote Desktop Gateway. For users to be granted access, they must provide their username/password combination along with information that the user has in their control. This information must be trusted and not easily duplicated, such as a cell phone number, landline number, application on a mobile device, and so on. RDG currently supports phone call and push notifications from Microsoft authenticator app methods for 2FA. For more information about supported authentication methods see the section [Determine which authentication methods your users can use](howto-mfa-nps-extension.md#determine-which-authentication-methods-your-users-can-use).
+Organizations can also integrate NPS with Azure AD MFA to enhance security and provide a high level of compliance. This helps ensure that users establish two-step verification to sign in to the Remote Desktop Gateway. For users to be granted access, they must provide their username/password combination along with information that the user has in their control. This information must be trusted and not easily duplicated, such as a cell phone number, landline number, application on a mobile device, and so on. RDG currently supports phone call and **Approve**/**Deny** push notifications from the Microsoft Authenticator app for 2FA. For more information about supported authentication methods, see the section [Determine which authentication methods your users can use](howto-mfa-nps-extension.md#determine-which-authentication-methods-your-users-can-use).
Prior to the availability of the NPS extension for Azure, customers who wished to implement two-step verification for integrated NPS and Azure AD MFA environments had to configure and maintain a separate MFA Server in the on-premises environment as documented in [Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).
Once an account has been enabled for MFA, you cannot sign in to resources govern
Follow the steps in [What does Azure AD Multi-Factor Authentication mean for me?](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to understand and properly configure your devices for MFA with your user account. > [!IMPORTANT]
-> The sign-in behavior for Remote Desktop Gateway doesn't provide the option to enter a verification code with Azure AD Multi-Factor Authentication. A user account must be configured for phone verification or the Microsoft Authenticator App with push notifications.
+> The sign-in behavior for Remote Desktop Gateway doesn't provide the option to enter a verification code with Azure AD Multi-Factor Authentication. A user account must be configured for phone verification or the Microsoft Authenticator App with **Approve**/**Deny** push notifications.
>
-> If neither phone verification or the Microsoft Authenticator App with push notifications is configured for a user, the user won't be able to complete the Azure AD Multi-Factor Authentication challenge and sign in to Remote Desktop Gateway.
+> If neither phone verification or the Microsoft Authenticator App with **Approve**/**Deny** push notifications is configured for a user, the user won't be able to complete the Azure AD Multi-Factor Authentication challenge and sign in to Remote Desktop Gateway.
> > The SMS text method doesn't work with Remote Desktop Gateway because it doesn't provide the option to enter a verification code.
The image below from Microsoft Message Analyzer shows network traffic filtered o
[Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md)
-[Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
+[Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
active-directory Custom Claims Provider Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-claims-provider-overview.md
Previously updated : 03/31/2023 Last updated : 04/10/2023
When a user authenticates to an application, a custom claims provider can be use
Key data about a user is often stored in systems external to Azure AD. For example, secondary email, billing tier, or sensitive information. Some applications may rely on these attributes for the application to function as designed. For example, the application may block access to certain features based on a claim in the token. The following short video provides an excellent overview of the Azure AD custom extensions and custom claims providers:
-> [!VIDEO https://www.youtube.com/embed/BYOMshjlwbc]
+> [!VIDEO https://www.youtube.com/embed/1tPA7B9ztz0]
Use a custom claims provider for the following scenarios:
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Previously updated : 03/31/2023 Last updated : 04/10/2023
This article describes how to configure and setup a custom claims provider with
This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at the following video, which demonstrates how to configure an Azure AD custom claims provider with a function app:
-> [!VIDEO https://www.youtube.com/embed/r-JEsMBJ7GE]
+> [!VIDEO https://www.youtube.com/embed/fxQGVIwX8_4]
## Prerequisites
To test your custom claim provider, follow these steps:
- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article. -- Learn how to [troubleshoot your custom extensions API](custom-extension-troubleshoot.md).
+- Learn how to [troubleshoot your custom extensions API](custom-extension-troubleshoot.md).
active-directory Alvao Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alvao-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure ALVAO for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to ALVAO.
++
+writer: twimmers
+
+ms.assetid: a72aa8af-28e0-4378-9d74-59b128c9cf16
++++ Last updated : 04/10/2023+++
+# Tutorial: Configure ALVAO for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both ALVAO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [ALVAO](https://www.alvao.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in ALVAO.
+> * Remove users in ALVAO when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and ALVAO.
+> * Provision groups and group memberships in ALVAO.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in ALVAO with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and ALVAO](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure ALVAO to support provisioning with Azure AD
+Contact ALVAO support to configure ALVAO to support provisioning with Azure AD.
+
+## Step 3. Add ALVAO from the Azure AD application gallery
+
+Add ALVAO from the Azure AD application gallery to start managing provisioning to ALVAO. If you have previously set up ALVAO for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to ALVAO
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in ALVAO based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for ALVAO in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **ALVAO**.
+
+ ![Screenshot of the ALVAO link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your ALVAO Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to ALVAO. If the connection fails, ensure your ALVAO account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to ALVAO**.
+
+1. Review the user attributes that are synchronized from Azure AD to ALVAO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ALVAO for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the ALVAO API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by ALVAO|
+ |---|---|---|---|
+ |userName|String|✓|✓
+ |externalId|String|✓|✓
+ |active|Boolean||✓
+ |displayName|String||✓
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |name.formatted|String||
+ |addresses[type eq "work"].formatted|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to ALVAO**.
+
+1. Review the group attributes that are synchronized from Azure AD to ALVAO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ALVAO for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by ALVAO|
+ |---|---|---|---|
+ |displayName|String|✓|✓
+ |externalId|String||
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for ALVAO, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to ALVAO by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
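If you prefer to check the provisioning job outside the portal, one option is the Microsoft Graph synchronization API. This is a sketch only; $TOKEN is a placeholder access token with a permission such as Synchronization.Read.All, and {servicePrincipal-object-id} is the object ID of the ALVAO enterprise application.

```bash
# Lists the provisioning (synchronization) jobs and their status for the ALVAO service principal.
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-object-id}/synchronization/jobs"
```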
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Better Stack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/better-stack-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Better Stack for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Better Stack.
++
+writer: twimmers
+
+ms.assetid: ceb66a35-ca28-4a43-b9be-c8074cd406ff
++++ Last updated : 04/10/2023+++
+# Tutorial: Configure Better Stack for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Better Stack and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Better Stack](https://betterstack.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Better Stack.
+> * Remove users in Better Stack when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Better Stack.
+> * Provision groups and group memberships in Better Stack.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Better Stack (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Better Stack with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Better Stack](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Better Stack to support provisioning with Azure AD
+Contact Better Stack support to configure Better Stack to support provisioning with Azure AD.
+
+## Step 3. Add Better Stack from the Azure AD application gallery
+
+Add Better Stack from the Azure AD application gallery to start managing provisioning to Better Stack. If you have previously set up Better Stack for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Better Stack
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Better Stack based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Better Stack in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Better Stack**.
+
+ ![Screenshot of the Better Stack link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Better Stack Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Better Stack. If the connection fails, ensure your Better Stack account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Better Stack**.
+
+1. Review the user attributes that are synchronized from Azure AD to Better Stack in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Better Stack for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Better Stack API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Better Stack|
+ |---|---|---|---|
+ |userName|String|✓|✓
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |externalId|String||
+ |timezone|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Better Stack**.
+
+1. Review the group attributes that are synchronized from Azure AD to Better Stack in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Better Stack for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Better Stack|
+ |---|---|---|---|
+ |displayName|String|✓|✓
+ |externalId|String||
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Better Stack, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Better Stack by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
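If you later need to kick off the job outside the portal (for example, after resolving a quarantine), you can start it with the Microsoft Graph synchronization API. This is a sketch only; $TOKEN is a placeholder access token with a permission such as Synchronization.ReadWrite.All, and the IDs are placeholders.

```bash
# Starts (or resumes) the provisioning job for the Better Stack service principal.
curl -X POST -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-object-id}/synchronization/jobs/{job-id}/start"
```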
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Kno2fy Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kno2fy-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Kno2fy for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Kno2fy.
++
+writer: twimmers
+
+ms.assetid: 68cc23f3-000b-421b-9d23-41f2eb5db521
++++ Last updated : 03/31/2023+++
+# Tutorial: Configure Kno2fy for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Kno2fy and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Kno2fy](https://www.kno2.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Kno2fy.
+> * Remove users in Kno2fy when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Kno2fy.
+> * Provision groups and group memberships in Kno2fy
+> * [Single sign-on](kno2fy-tutorial.md) to Kno2fy (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* One or more Kno2 organizations that have the provisioning service enabled.
+* A Kno2 administrator account with permission to manage the organizations that should have their users managed through Azure AD.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Kno2fy](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Kno2fy to support provisioning with Azure AD
+1. Provisioning with Azure AD is intended for use with single sign-on (SSO) using Azure AD as the identity provider. Enable SSO for the Kno2fy application in Azure AD and add the Azure AD identity provider by adding the appropriate issuer value in the Kno2 settings for your organization(s).
+1. A Kno2 team member will assist in acquiring a provisioning token and the URL for use with the service. Save these values for use in Step 5.
+
+## Step 3. Add Kno2fy from the Azure AD application gallery
+
+Add Kno2fy from the Azure AD application gallery to start managing provisioning to Kno2fy. If you have previously set up Kno2fy for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Kno2fy
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Kno2fy based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Kno2fy in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Kno2fy**.
+
+ ![Screenshot of the Kno2fy link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Kno2fy Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Kno2fy. If the connection fails, ensure your Kno2fy account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Kno2fy**.
+
+1. Review the user attributes that are synchronized from Azure AD to Kno2fy in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Kno2fy for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Kno2fy API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Kno2fy|
+ |---|---|---|---|
+ |userName|String|✓|✓
+ |active|Boolean||✓
+ |displayName|String||✓
+ |emails[type eq "work"].value|String||✓
+ |name.givenName|String||✓
+ |name.familyName|String||✓
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Kno2fy**.
+
+1. Review the group attributes that are synchronized from Azure AD to Kno2fy in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Kno2fy for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Kno2fy|
+ |---|---|---|---|
+ |displayName|String|✓|✓
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Kno2fy, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Kno2fy by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
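To spot-check the results of the initial cycle outside the portal, you can also query the provisioning logs through Microsoft Graph. This is a sketch only; $TOKEN is a placeholder access token with a permission such as AuditLog.Read.All.

```bash
# Returns the 20 most recent provisioning events; add an OData filter to narrow them to the Kno2fy service principal.
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/auditLogs/provisioning?\$top=20"
```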
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Advisor uses machine-learning algorithms to identify low utilization and to iden
Advisor identifies resources that haven't been used at all over the last 7 days and makes a recommendation to shut them down. - Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we've found that **CPU** and **Outbound Network utilization** are sufficient.-- The last 7 days of utilization data are analyzed
+- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations.
- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances. - A shutdown recommendation is created if: - P95th of the maximum value of CPU utilization summed across all cores is less than 3%.
Advisor identifies resources that haven't been used at all over the last 7 days
Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU. - Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**. -- The last 7 days of utilization data are analyzed
+- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations.
- Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations. - An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria: - Performance of the workloads on the new SKU shouldn't be impacted.
A burstable SKU recommendation is made if:
- The average **CPU utilization** is less than a burstable SKUs' baseline performance - If the P95 of CPU is less than two times the burstable SKUs' baseline performance - If the current SKU doesn't have accelerated networking enabled, since burstable SKUs don't support accelerated networking yet
- - If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days
+ - If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days. Note that you can change your lookback period in the configurations.
The resulting recommendation suggests that a user should resize their current virtual machine or virtual machine scale set to a burstable SKU with the same number of cores. This suggestion is made so a user can take advantage of lower cost and also the fact that the workload has low average utilization but high spikes in cases, which can be best served by the B-series SKU.
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
The following are important considerations to evaluate:
```bash #!/bin/sh # Patch the Persistent Volume in case ReclaimPolicy is Delete
- namespace=$1
+ NAMESPACE=$1
i=1
- for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ for PVC in $(kubectl get pvc -n $NAMESPACE | awk '{ print $1}'); do
# Ignore first record as it contains header if [ $i -eq 1 ]; then i=$((i + 1)) else
- pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
- reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- echo "Reclaim Policy for Persistent Volume $pv is $reclaimPolicy"
- if [[ $reclaimPolicy == "Delete" ]]; then
+ PV="$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.spec.volumeName}')"
+ RECLAIMPOLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ echo "Reclaim Policy for Persistent Volume $PV is $RECLAIMPOLICY"
+ if [[ $RECLAIMPOLICY == "Delete" ]]; then
echo "Updating ReclaimPolicy for $pv to Retain"
- kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ kubectl patch pv $PV -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
fi fi done
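A usage sketch for the script above, assuming you save it as patch-reclaim-policy.sh; the file name and namespace are placeholders.

```bash
# Review the PVCs in the target namespace first, then switch their PVs to a Retain reclaim policy.
kubectl get pvc -n production
bash patch-reclaim-policy.sh production
```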
The following are important considerations to evaluate:
#!/bin/sh #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage # TimeFormat 2022-04-20T13:19:56Z
- namespace=$1
- fileName=$(date +%Y%m%d%H%M)-$namespace
- existingStorageClass=$2
- storageClassNew=$3
- starttimestamp=$4
- endtimestamp=$5
+ NAMESPACE=$1
+ FILENAME=$(date +%Y%m%d%H%M)-$NAMESPACE
+ EXISTING_STORAGE_CLASS=$2
+ STORAGE_CLASS_NEW=$3
+ STARTTIMESTAMP=$4
+ ENDTIMESTAMP=$5
i=1
- for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ for PVC in $(kubectl get pvc -n $NAMESPACE | awk '{ print $1}'); do
# Ignore first record as it contains header if [ $i -eq 1 ]; then i=$((i + 1)) else
- pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
- if [[ $pvcCreationTime > $starttimestamp ]]; then
- if [[ $endtimestamp > $pvcCreationTime ]]; then
- pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
- reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
- echo $pvc
- reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- if [[ $reclaimPolicy == "Retain" ]]; then
- if [[ $storageClass == $existingStorageClass ]]; then
- storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
- skuName="$(kubectl get storageClass $storageClass -o jsonpath='{.reclaimPolicy}')"
- diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
- persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ PVC_CREATION_TIME=$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $PVC_CREATION_TIME > $STARTTIMESTAMP ]]; then
+ if [[ $ENDTIMESTAMP > $PVC_CREATION_TIME ]]; then
+ PV="$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.spec.volumeName}')"
+ RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ STORAGECLASS="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.storageClassName}')"
+ echo $PVC
+ RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ if [[ $RECLAIM_POLICY == "Retain" ]]; then
+ if [[ $STORAGECLASS == $EXISTING_STORAGE_CLASS ]]; then
+ STORAGE_SIZE="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.capacity.storage}')"
+ SKU_NAME="$(kubectl get storageClass $STORAGECLASS -o jsonpath='{.reclaimPolicy}')"
+ DISK_URI="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.azureDisk.diskURI}')"
+ PERSISTENT_VOLUME_RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- cat >$pvc-csi.yaml <<EOF
+ cat >$PVC-csi.yaml <<EOF
apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: disk.csi.azure.com
- name: $pv-csi
+ name: $PV-csi
spec: accessModes: - ReadWriteOnce capacity:
- storage: $storageSize
+ storage: $STORAGE_SIZE
claimRef: apiVersion: v1 kind: PersistentVolumeClaim
- name: $pvc-csi
- namespace: $namespace
+ name: $PVC-csi
+ namespace: $NAMESPACE
csi: driver: disk.csi.azure.com volumeAttributes:
- csi.storage.k8s.io/pv/name: $pv-csi
- csi.storage.k8s.io/pvc/name: $pvc-csi
- csi.storage.k8s.io/pvc/namespace: $namespace
- requestedsizegib: "$storageSize"
- skuname: $skuName
- volumeHandle: $diskURI
- persistentVolumeReclaimPolicy: $persistentVolumeReclaimPolicy
- storageClassName: $storageClassNew
+ csi.storage.k8s.io/pv/name: $PV-csi
+ csi.storage.k8s.io/pvc/name: $PVC-csi
+ csi.storage.k8s.io/pvc/namespace: $NAMESPACE
+ requestedsizegib: "$STORAGE_SIZE"
+ skuname: $SKU_NAME
+ volumeHandle: $DISK_URI
+ persistentVolumeReclaimPolicy: $PERSISTENT_VOLUME_RECLAIM_POLICY
+ storageClassName: $STORAGE_CLASS_NEW
apiVersion: v1 kind: PersistentVolumeClaim metadata:
- name: $pvc-csi
- namespace: $namespace
+ name: $PVC-csi
+ namespace: $NAMESPACE
spec: accessModes: - ReadWriteOnce
- storageClassName: $storageClassNew
+ storageClassName: $STORAGE_CLASS_NEW
resources: requests:
- storage: $storageSize
- volumeName: $pv-csi
+ storage: $STORAGE_SIZE
+ volumeName: $PV-csi
EOF
- kubectl apply -f $pvc-csi.yaml
- line="PVC:$pvc,PV:$pv,StorageClassTarget:$storageClassNew"
- printf '%s\n' "$line" >>$fileName
+ kubectl apply -f $PVC-csi.yaml
+ LINE="PVC:$PVC,PV:$PV,StorageClassTarget:$STORAGE_CLASS_NEW"
+ printf '%s\n' "$LINE" >>$FILENAME
fi fi fi
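Assuming the script above is saved as `migrate-retained-volumes.sh` (the file name, namespace, and storage class names here are assumptions), an invocation might look like the following sketch; the arguments follow the positional order in which the script reads them:

```bash
# Arguments: <namespace> <existing-storage-class> <new-csi-storage-class> <start-timestamp> <end-timestamp>
chmod +x migrate-retained-volumes.sh
./migrate-retained-volumes.sh my-namespace default managed-csi 2022-04-01T00:00:00Z 2022-04-30T23:59:59Z
```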
Before proceeding, verify the following:
#!/bin/sh #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage # TimeFormat 2022-04-20T13:19:56Z
- namespace=$1
- fileName=$namespace-$(date +%Y%m%d%H%M)
- existingStorageClass=$2
- storageClassNew=$3
- volumestorageClass=$4
- starttimestamp=$5
- endtimestamp=$6
+ NAMESPACE=$1
+ FILENAME=$NAMESPACE-$(date +%Y%m%d%H%M)
+ EXISTING_STORAGE_CLASS=$2
+ STORAGE_CLASS_NEW=$3
+ VOLUME_STORAGE_CLASS=$4
+ START_TIME_STAMP=$5
+ END_TIME_STAMP=$6
i=1
- for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ for PVC in $(kubectl get pvc -n $NAMESPACE | awk '{ print $1}'); do
# Ignore first record as it contains header if [ $i -eq 1 ]; then i=$((i + 1)) else
- pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
- if [[ $pvcCreationTime > $starttimestamp ]]; then
- if [[ $endtimestamp > $pvcCreationTime ]]; then
- pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
- reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
- echo $pvc
- reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- if [[ $storageClass == $existingStorageClass ]]; then
- storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
- skuName="$(kubectl get storageClass $storageClass -o jsonpath='{.reclaimPolicy}')"
- diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
- targetResourceGroup="$(cut -d'/' -f5 <<<"$diskURI")"
- echo $diskURI
- echo $targetResourceGroup
- persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- az snapshot create --resource-group $targetResourceGroup --name $pvc-$fileName --source "$diskURI"
- snapshotPath=$(az snapshot list --resource-group $targetResourceGroup --query "[?name == '$pvc-$fileName'].id | [0]")
- snapshotHandle=$(echo "$snapshotPath" | tr -d '"')
- echo $snapshotHandle
+ PVC_CREATION_TIME=$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $PVC_CREATION_TIME > $START_TIME_STAMP ]]; then
+ if [[ $END_TIME_STAMP > $PVC_CREATION_TIME ]]; then
+ PV="$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.spec.volumeName}')"
+ RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ STORAGE_CLASS="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.storageClassName}')"
+ echo $PVC
+ RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ if [[ $STORAGE_CLASS == $EXISTING_STORAGE_CLASS ]]; then
+ STORAGE_SIZE="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.capacity.storage}')"
+ SKU_NAME="$(kubectl get storageClass $STORAGE_CLASS -o jsonpath='{.reclaimPolicy}')"
+ DISK_URI="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.azureDisk.diskURI}')"
+ TARGET_RESOURCE_GROUP="$(cut -d'/' -f5 <<<"$DISK_URI")"
+ echo $DISK_URI
+ echo $TARGET_RESOURCE_GROUP
+ PERSISTENT_VOLUME_RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ az snapshot create --resource-group $TARGET_RESOURCE_GROUP --name $PVC-$FILENAME --source "$DISK_URI"
+ SNAPSHOT_PATH=$(az snapshot list --resource-group $TARGET_RESOURCE_GROUP --query "[?name == '$PVC-$FILENAME'].id | [0]")
+ SNAPSHOT_HANDLE=$(echo "$SNAPSHOT_PATH" | tr -d '"')
+ echo $SNAPSHOT_HANDLE
sleep 10 # Create Restore File
- cat <<EOF >$pvc-csi.yml
+ cat <<EOF >$PVC-csi.yml
apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata:
- name: $pvc-$fileName
+ name: $PVC-$FILENAME
spec: deletionPolicy: 'Delete' driver: 'disk.csi.azure.com'
- volumeSnapshotClassName: $volumestorageClass
+ volumeSnapshotClassName: $VOLUME_STORAGE_CLASS
source:
- snapshotHandle: $snapshotHandle
+ snapshotHandle: $SNAPSHOT_HANDLE
volumeSnapshotRef: apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot
- name: $pvc-$fileName
+ name: $PVC-$FILENAME
namespace: $1 apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata:
- name: $pvc-$fileName
+ name: $PVC-$FILENAME
namespace: $1 spec:
- volumeSnapshotClassName: $volumestorageClass
+ volumeSnapshotClassName: $VOLUME_STORAGE_CLASS
source:
- volumeSnapshotContentName: $pvc-$fileName
+ volumeSnapshotContentName: $PVC-$FILENAME
apiVersion: v1 kind: PersistentVolumeClaim metadata:
- name: csi-$pvc
+ name: csi-$PVC
namespace: $1 spec: accessModes: - ReadWriteOnce
- storageClassName: $storageClassNew
+ storageClassName: $STORAGE_CLASS_NEW
resources: requests:
- storage: $storageSize
+ storage: $STORAGE_SIZE
dataSource:
- name: $pvc-$fileName
+ name: $PVC-$FILENAME
kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io EOF
- kubectl create -f $pvc-csi.yml
- line="OLDPVC:$pvc,OLDPV:$pv,VolumeSnapshotContent:volumeSnapshotContent-$fileName,VolumeSnapshot:volumesnapshot$fileName,OLDdisk:$diskURI"
- printf '%s\n' "$line" >>$fileName
+ kubectl create -f $PVC-csi.yml
+ LINE="OLDPVC:$PVC,OLDPV:$PV,VolumeSnapshotContent:volumeSnapshotContent-$FILENAME,VolumeSnapshot:volumesnapshot$FILENAME,OLDdisk:$DISK_URI"
+ printf '%s\n' "$LINE" >>$FILENAME
fi fi fi
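As a hedged usage sketch for the snapshot-based script above (the file name, namespace, storage class, and volume snapshot class names are assumptions; replace `csi-azuredisk-vsc` with a snapshot class that actually exists in your cluster):

```bash
# Arguments: <namespace> <existing-storage-class> <new-csi-storage-class> <volume-snapshot-class> <start-timestamp> <end-timestamp>
chmod +x migrate-with-snapshots.sh
./migrate-with-snapshots.sh my-namespace default managed-csi csi-azuredisk-vsc 2022-04-01T00:00:00Z 2022-04-30T23:59:59Z
```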
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Azure AD workload identity (preview) is supported on both Windows and Linux clus
1. Use the Azure CLI `az account set` command to set a specific subscription to be the current active subscription. Then use the `az identity create` command to create a managed identity. ```azurecli
- export subscriptionID=<subscription id>
- export resourceGroupName=<resource group name>
+ export SUBSCRIPTION_ID=<subscription id>
+ export RESOURCE_GROUP=<resource group name>
export UAMI=<name for user assigned identity> export KEYVAULT_NAME=<existing keyvault name>
- export clusterName=<aks cluster name>
+ export CLUSTER_NAME=<aks cluster name>
- az account set --subscription $subscriptionID
- az identity create --name $UAMI --resource-group $resourceGroupName
- export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $resourceGroupName --name $UAMI --query 'clientId' -o tsv)"
- export IDENTITY_TENANT=$(az aks show --name $clusterName --resource-group $resourceGroupName --query identity.tenantId -o tsv)
+ az account set --subscription $SUBSCRIPTION_ID
+ az identity create --name $UAMI --resource-group $RESOURCE_GROUP
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $RESOURCE_GROUP --name $UAMI --query 'clientId' -o tsv)"
+ export IDENTITY_TENANT=$(az aks show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query identity.tenantId -o tsv)
``` 2. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the `az keyvault set-policy` command shown below.
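The `az keyvault set-policy` call itself isn't included in this excerpt. A minimal sketch, assuming the variables exported in the previous step and that the workload only needs to read secrets, could look like this (extend it with `--key-permissions get` or `--certificate-permissions get` if keys or certificates are also required):

```bash
# Sketch only: grant the user-assigned identity's client ID read access to secrets in the vault.
az keyvault set-policy --name "$KEYVAULT_NAME" \
  --secret-permissions get \
  --spn "$USER_ASSIGNED_CLIENT_ID"
```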
Azure AD workload identity (preview) is supported on both Windows and Linux clus
3. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL. ```bash
- export AKS_OIDC_ISSUER="$(az aks show --resource-group $resourceGroupName --name $clusterName --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ export AKS_OIDC_ISSUER="$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query "oidcIssuerProfile.issuerUrl" -o tsv)"
echo $AKS_OIDC_ISSUER ```
Azure AD workload identity (preview) is supported on both Windows and Linux clus
4. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace. ```bash
- export serviceAccountName="workload-identity-sa" # sample name; can be changed
- export serviceAccountNamespace="default" # can be changed to namespace of your workload
+ export SERVICE_ACCOUNT_NAME="workload-identity-sa" # sample name; can be changed
+ export SERVICE_ACCOUNT_NAMESPACE="default" # can be changed to namespace of your workload
cat <<EOF | kubectl apply -f - apiVersion: v1
Azure AD workload identity (preview) is supported on both Windows and Linux clus
azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID} labels: azure.workload.identity/use: "true"
- name: ${serviceAccountName}
- namespace: ${serviceAccountNamespace}
+ name: ${SERVICE_ACCOUNT_NAME}
+ namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF ``` Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject. ```bash
- export federatedIdentityName="aksfederatedidentity" # can be changed as needed
- az identity federated-credential create --name $federatedIdentityName --identity-name $UAMI --resource-group $resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${serviceAccountNamespace}:${serviceAccountName}
+ export FEDERATED_IDENTITY_NAME="aksfederatedidentity" # can be changed as needed
+ az identity federated-credential create --name $FEDERATED_IDENTITY_NAME --identity-name $UAMI --resource-group $RESOURCE_GROUP --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
``` 5. Deploy a `SecretProviderClass` by using the following YAML script, noticing that the variables will be interpolated:
Azure AD workload identity (preview) is supported on both Windows and Linux clus
metadata: name: busybox-secrets-store-inline-user-msi spec:
- serviceAccountName: ${serviceAccountName}
+ serviceAccountName: ${SERVICE_ACCOUNT_NAME}
containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Global Azure cloud is supported with Arc support on the following regions:
| | -- | -- | | `australiaeast` | :heavy_check_mark: | :heavy_check_mark: | | `australiasoutheast` | :heavy_check_mark: | :x: |
+| `brazilsouth` | :heavy_check_mark: | :x: |
| `canadacentral` | :heavy_check_mark: | :heavy_check_mark: | | `canadaeast` | :heavy_check_mark: | :heavy_check_mark: | | `centralindia` | :heavy_check_mark: | :heavy_check_mark: |
Global Azure cloud is supported with Arc support on the following regions:
| `eastus2` | :heavy_check_mark: | :heavy_check_mark: | | `eastus2euap` | :x: | :heavy_check_mark: | | `francecentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `francesouth` | :heavy_check_mark: | :x: |
| `germanywestcentral` | :heavy_check_mark: | :heavy_check_mark: | | `japaneast` | :heavy_check_mark: | :heavy_check_mark: |
+| `japanwest` | :heavy_check_mark: | :x: |
| `koreacentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `koreasouth` | :heavy_check_mark: | :x: |
| `northcentralus` | :heavy_check_mark: | :heavy_check_mark: | | `northeurope` | :heavy_check_mark: | :heavy_check_mark: | | `norwayeast` | :heavy_check_mark: | :x: | | `southafricanorth` | :heavy_check_mark: | :x: | | `southcentralus` | :heavy_check_mark: | :heavy_check_mark: | | `southeastasia` | :heavy_check_mark: | :heavy_check_mark: |
+| `southindia` | :heavy_check_mark: | :x: |
| `swedencentral` | :heavy_check_mark: | :heavy_check_mark: | | `switzerlandnorth` | :heavy_check_mark: | :heavy_check_mark: |
+| `uaenorth` | :heavy_check_mark: | :x: |
| `uksouth` | :heavy_check_mark: | :heavy_check_mark: |
+| `ukwest` | :heavy_check_mark: | :x: |
| `westcentralus` | :heavy_check_mark: | :heavy_check_mark: | | `westeurope` | :heavy_check_mark: | :heavy_check_mark: | | `westus` | :heavy_check_mark: | :heavy_check_mark: |
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
na- Previously updated : 10/27/2021+ Last updated : 04/03/2023 +
To use Application Insights, [create an instance of the Application Insights ser
## Enable Application Insights logging for your API
+Use the following steps to enable Application Insights logging for an API. You can also enable Application Insights logging for all APIs.
+ 1. Navigate to your **Azure API Management service instance** in the **Azure portal**. 1. Select **APIs** from the menu on the left. 1. Click on your API, in this case **Demo Conference API**. If configured, select a version.+
+ > [!TIP]
+ > To enable logging for all APIs, select **All APIs**.
1. Go to the **Settings** tab from the top bar. 1. Scroll down to the **Diagnostics Logs** section. :::image type="content" source="media/api-management-howto-app-insights/apim-app-insights-api-1.png" alt-text="App Insights logger":::
To use Application Insights, [create an instance of the Application Insights ser
1. Input **100** as **Sampling (%)** and select the **Always log errors** checkbox. 1. Leave the rest of the settings as is. For details about the settings, see [Diagnostic logs settings reference](diagnostic-logs-reference.md).
- > [!WARNING]
- > Overriding the default **Number of payload bytes to log** value **0** may significantly decrease the performance of your APIs.
+ > [!WARNING]
+ > Overriding the default **Number of payload bytes to log** value **0** may significantly decrease the performance of your APIs.
1. Select **Save**. 1. Behind the scenes, a [Diagnostic](/rest/api/apimanagement/current-ga/diagnostic/create-or-update) entity named `applicationinsights` is created at the API level.
You can specify loggers on different levels:
+ A logger for all APIs Specifying *both*:-- By default, the single API logger (more granular level) will override the one for all APIs.
+- By default, the single API logger (more granular level) overrides the one for all APIs.
- If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), please contact Microsoft Support. ## What data is added to Application Insights
Application Insights receives:
| *Exception* | For every failed request: <ul><li>Failed because of a closed client connection</li><li>Triggered an *on-error* section of the API policies</li><li>Has a response HTTP status code matching 4xx or 5xx</li></ul> | | *Trace* | If you configure a [trace](trace-policy.md) policy. <br /> The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging. |
-### Emit custom metrics
-You can emit custom metrics by configuring the [`emit-metric`](emit-metric-policy.md) policy.
+> [!NOTE]
+> See [Application Insights limits](../azure-monitor/service-limits.md#application-insights) for information about the maximum size and number of metrics and events per Application Insights instance.
-To make Application Insights pre-aggregated metrics available in API Management, you'll need to manually enable custom metrics in the service.
-1. Use the [`emit-metric`](emit-metric-policy.md) policy with the [Create or Update API](/rest/api/apimanagement/current-ga/api-diagnostic/create-or-update).
-1. Add `"metrics":true` to the payload, along with any other properties.
+## Emit custom metrics
+You can emit [custom metrics](../azure-monitor/essentials/metrics-custom-overview.md) to Application Insights from your API Management instance. API Management emits custom metrics using the [emit-metric](emit-metric-policy.md) policy.
> [!NOTE]
-> See [Application Insights limits](../azure-monitor/service-limits.md#application-insights) for information about the maximum size and number of metrics and events per Application Insights instance.
+> Custom metrics are a preview feature of Azure Monitor and subject to limitations.
+
+To emit custom metrics, perform the following configuration steps.
+
+1. Enable **Custom metrics (Preview)** with custom dimensions in your Application Insights instance.
+
+ 1. Navigate to your Application Insights instance in the portal.
+ 1. In the left menu, select **Usage and estimated costs**.
+ 1. Select **Custom metrics (Preview)** > **With dimensions**.
+ 1. Select **OK**.
+
+1. Add the `"metrics": true` property to the `applicationInsights` diagnostic entity that's configured in API Management. Currently you must add this property using the API Management [Diagnostic - Create or Update](/rest/api/apimanagement/current-ga/diagnostic/create-or-update) REST API. For example:
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroupName}/providers/Microsoft.ApiManagement/service/{APIManagementServiceName}/diagnostics/applicationinsights
+
+ {
+ [...]
+ {
+ "properties": {
+ "loggerId": "/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroupName}/providers/Microsoft.ApiManagement/service/{APIManagementServiceName}/loggers/{ApplicationInsightsLoggerName}",
+ "metrics": true
+ [...]
+ }
+ }
+ ```
+1. Ensure that the Application Insights logger is configured at the scope you intend to emit custom metrics (either all APIs, or a single API). For more information, see [Enable Application Insights logging for your API](#enable-application-insights-logging-for-your-api), earlier in this article.
+1. Configure the `emit-metric` policy at a scope where Application Insights logging is configured (either all APIs, or a single API) and is enabled for custom metrics. For policy details, see the [`emit-metric`](emit-metric-policy.md) policy reference.
## Performance implications and log sampling
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in
description: This article provides information about deploying the extension-based User Hybrid Runbook Worker to run runbooks on Windows or Linux machines in your on-premises datacenter or other cloud environment. - Previously updated : 04/01/2023+ Last updated : 04/05/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
Azure Automation stores and manages runbooks and then delivers them to one or mo
| Windows (x64) | Linux (x64) | |||
-| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro | &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension would follow support timelines of the OS vendor.|
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro | &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension would follow support timelines of the OS vendor.|
### Other Requirements
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Last updated : 04/05/2023 Previously updated : 04/01/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
The purpose of the Extension-based approach is to simplify the installation and
| Windows (x64) | Linux (x64) | |||
-| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8,9,10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension would follow support timelines of the OS vendor. |
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8,9,10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension would follow support timelines of the OS vendor. |
### Other Requirements
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
spring.cloud.azure.appconfiguration.stores[0].endpoints[0]="<first-replica-endpo
spring.cloud.azure.appconfiguration.stores[0].endpoints[1]="<second-replica-endpoint>" ```
+**Connect with Connection String**
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].connection-strings[0]="${FIRST_REPLICA_CONNECTION_STRING}"
+spring.cloud.azure.appconfiguration.stores[0].connection-strings[1]="${SECOND_REPLICA_CONNECTION_STRING}"
+```
> [!NOTE]
-> The failover support is available if you use version of **4.0.0-beta.1** or later of any of the following packages.
+> The failover support is available if you use version **4.7.0** or later of any of the following packages.
> - `spring-cloud-azure-appconfiguration-config` > - `spring-cloud-azure-appconfiguration-config-web` > - `spring-cloud-azure-starter-appconfiguration-config`
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Last updated 03/23/2023 + # Configure active geo-replication for Enterprise Azure Cache for Redis instances In this article, you learn how to configure an active geo-replicated cache using the Azure portal.
Active geo-replication groups up to five instances of Enterprise Azure Cache for
| |::|:-:|::| |Available | No | No | Yes | -
-|Tier | Available|
-|:|::|
-|Basic, Standard | No |
-|Premium | No |
-|Enterprise, Enterprise Flash| Yes |
-- The Premium tier of Azure Cache for Redis offers a version of geo-replication called [_passive geo-replication_](cache-how-to-geo-replication.md). Passive geo-replication provides an active-passive configuration. ## Active geo-replication prerequisites
Learn more about Azure Cache for Redis features.
* [Azure Cache for Redis service tiers](cache-overview.md#service-tiers) * [High availability for Azure Cache for Redis](cache-high-availability.md)++
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Previously updated : 03/24/2023 Last updated : 04/10/2023
With the Premium tier, you can't use Append-only File (AOF) persistence with mul
### How do I check if soft delete is enabled on my storage account?
-Select the storage account that your cache is using for persistence. Select **Data Protection** from the Resource menu. In the working pane, check the state of *Enable soft delete for blobs*.
+Select the storage account that your cache is using for persistence. Select **Data Protection** from the Resource menu. In the working pane, check the state of *Enable soft delete for blobs*. For more information on soft delete in Azure storage accounts, see [Enable soft delete for blobs](/azure/storage/blobs/soft-delete-blob-enable?tabs=azure-portal).
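If you prefer a scripted check over the portal steps above, something along the following lines should work (a sketch; the account and resource group names are placeholders, not values from the article):

```bash
# Returns true when blob soft delete is enabled on the storage account used for persistence.
az storage account blob-service-properties show \
  --account-name <storage-account-name> \
  --resource-group <resource-group-name> \
  --query deleteRetentionPolicy.enabled
```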
## Next steps
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
Title: Use Azure Maps Drawing Error Visualizer
-description: In this article, you'll learn about how to visualize warnings and errors returned by the Creator Conversion API.
+description: This article demonstrates how to visualize warnings and errors returned by the Creator Conversion API.
Last updated 02/17/2023
# Using the Azure Maps Drawing Error Visualizer with Creator
-The Drawing Error Visualizer is a stand-alone web application that displays [Drawing package warnings and errors](drawing-conversion-error-codes.md) detected during the conversion process. The Error Visualizer web application consists of a static page that you can use without connecting to the internet. You can use the Error Visualizer to fix errors and warnings in accordance with [Drawing package requirements](drawing-requirements.md). The [Azure Maps Conversion API](/rest/api/maps/v2/conversion) returns a response with a link to the Error Visualizer only when an error is detected.
+The *Drawing Error Visualizer* is a stand-alone web application that displays [Drawing package warnings and errors] detected during the conversion process. The Error Visualizer web application consists of a static page that you can use without connecting to the internet. You can use the Error Visualizer to fix errors and warnings in accordance with [Drawing package requirements]. The [Azure Maps Conversion API] returns a response with a link to the Error Visualizer only when an error is detected.
## Prerequisites
The Drawing Error Visualizer is a stand-alone web application that displays [Dra
* A [subscription key] * A [Creator resource]
-This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment.
+This tutorial uses the [Postman] application, but you may choose a different API development environment.
## Download
-1. Upload your drawing package to the Azure Maps Creator service to obtain a `udid` for the uploaded package. For steps on how to upload a package, see [Upload a drawing package](tutorial-creator-indoor-maps.md#upload-a-drawing-package).
+1. Upload your drawing package to the Azure Maps Creator service to obtain a `udid` for the uploaded package. For steps on how to upload a package, see [Upload a drawing package].
-2. Now that the drawing package is uploaded, we'll use `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package](tutorial-creator-indoor-maps.md#convert-a-drawing-package).
+2. Now that the drawing package is uploaded, use `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package].
>[!NOTE] >If your conversion process succeeds, you will not receive a link to the Error Visualizer tool.
This tutorial uses the [Postman](https://www.postman.com/) application, but you
## Setup
-Inside the downloaded zipped package from the `diagnosticPackageLocation` link, you'll find two files.
+The downloaded zipped package from the `diagnosticPackageLocation` link contains the following two files.
* _VisualizationTool.zip_: Contains the source code, media, and web page for the Drawing Error Visualizer. * _ConversionWarningsAndErrors.json_: Contains a formatted list of warnings, errors, and other details that are used by the Drawing Error Visualizer.
Unzip the _VisualizationTool.zip_ folder. It contains the following items:
* _static_ folder: source code * _index.html_ file: the web application.
-Open the _index.html_ file using any of the browsers below, with the respective version number. You may use a different version, if the version offers equally compatible behavior as the listed version.
+Open the _index.html_ file using any of the following browsers, with the respective version number. You may use a different version, if the version offers equally compatible behavior as the listed version.
* Microsoft Edge 80 * Safari 13
After launching the Drawing Error Visualizer tool, you'll be presented with the
:::image type="content" source="./media/drawing-errors-visualizer/start-page.png" alt-text="Drawing Error Visualizer App - Start Page":::
-The _ConversionWarningsAndErrors.json_ file has been placed at the root of the downloaded directory. To load the _ConversionWarningsAndErrors.json_, drag & drop the file onto the box. Or, click on the box, find the file in the `File Explorer dialogue`, and upload the file.
+The _ConversionWarningsAndErrors.json_ file has been placed at the root of the downloaded directory. To load the _ConversionWarningsAndErrors.json_, drag & drop the file onto the box. Or, select the box, find the file in the `File Explorer` dialog, and upload the file.
:::image type="content" source="./media/drawing-errors-visualizer/loading-data.gif" alt-text="Drawing Error Visualizer App - Drag and drop to load data":::
-Once the _ConversionWarningsAndErrors.json_ file loads, you'll see a list of your drawing package errors and warnings. Each error or warning is specified by the layer, level, and a detailed message. To view detailed information about an error or warning, click on the **Details** link. An intractable section will then appear below the list. You may now navigate to each error to learn more details on how to resolve the error.
+The _ConversionWarningsAndErrors.json_ file contains a list of your drawing package errors and warnings. To view detailed information about an error or warning, select the **Details** link. An interactive section appears below the list. You may now navigate to each error to learn more details on how to resolve the error.
:::image type="content" source="./media/drawing-errors-visualizer/errors.png" alt-text="Drawing Error Visualizer App - Errors and Warnings":::
Once the _ConversionWarningsAndErrors.json_ file loads, you'll see a list of you
Learn more by reading: > [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
+> [Creator for indoor maps]
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps Conversion API]: /rest/api/maps/v2/conversion
+[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
+[Creator for indoor maps]: creator-indoor-maps.md
+[Creator resource]: how-to-manage-creator.md
+[Drawing package requirements]: drawing-requirements.md
+[Drawing package warnings and errors]: drawing-conversion-error-codes.md
+[Postman]: https://www.postman.com/
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Creator resource]: how-to-manage-creator.md
+[Upload a drawing package]: tutorial-creator-indoor-maps.md#upload-a-drawing-package
azure-maps Drawing Tools Interactions Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-interactions-keyboard-shortcuts.md
This article outlines all the different ways to draw and edit shapes using a mou
The drawing manager supports three different ways of interacting with the map, to draw shapes.
-* `click` - Coordinates are added when the mouse or touch is clicked.
-* `freehand ` - Coordinates are added when the mouse or touch is dragged on the map.
-* `hybrid` - Coordinates are added when the mouse or touch is clicked or dragged.
+- `click` - Coordinates are added when the mouse or touch is clicked.
+- `freehand` - Coordinates are added when the mouse or touch is dragged on the map.
+- `hybrid` - Coordinates are added when the mouse or touch is clicked or dragged.
## How to draw shapes
- Before any shape can be drawn, set the `drawingMode` option of the drawing manager to a supported drawing setting. This setting can be programmed, or invoked by pressing one of the drawing buttons on the toolbar. The drawing mode stays enabled, even after a shape has been drawn, making it easy to draw additional shapes of the same type. Programmatically set the drawing mode to an idle state. Or, switch to an idle state by clicking the current drawing modes button on the toolbar.
+ Before any shape can be drawn, set the `drawingMode` option of the drawing manager to a supported drawing setting. This setting can be set programmatically, or invoked by pressing one of the drawing buttons on the toolbar. The drawing mode stays enabled, even after a shape has been drawn, making it easy to draw more shapes of the same type. Programmatically set the drawing mode to an idle state. Or, switch to an idle state by clicking the current drawing mode's button on the toolbar.
The next sections outline all the different ways that shapes can be drawn on the map.
The next sections outline all the different ways that shapes can be drawn on the
When the drawing manager is in `draw-point` drawing mode, the following actions can be done to draw points on the map. These methods work with all interaction modes. **Start drawing**
-
+
+- Select the left mouse button, or touch the map to add a point to the map.
+- If the mouse is over the map, press the `F` key, and a point is added at the coordinate of the mouse pointer. This method provides higher accuracy for adding a point to the map. There's less movement on the mouse due to the pressing motion of the left mouse button.
+- Keep clicking, touching, or pressing `F` to add more points to the map.
+ **Finish drawing**+
+- Select any button in the drawing toolbar.
+- Programmatically set the drawing mode.
+- Press the `C` key.
**Cancel drawing**+
+- Press the `Escape` key.
### How to draw a line When the drawing manager is in `draw-line` mode, the following actions can be done to draw points on the map, depending on the interaction mode. **Start drawing**
- * Click the left mouse button, or touch the map to add each point of a line on the map. A coordinate is added to the line for each click or touch.
- * If the mouse is over the map, press the `F` key, and a point will be added at the coordinate of the mouse pointer. This method provides higher accuracy for adding a point to the map. There will be less movement on the mouse due to the pressing motion of the left mouse button.
- * Keep clicking until all the desired points have been added to the line.
- * Press down the left mouse button, or touch-down on the map and drag the mouse, or touch point around. Coordinates are added to the line as the mouse or touch point moves around the map. As soon as the mouse or touch-up event is triggered, the drawing is completed. The frequency at which coordinates are added is defined by the drawing managers `freehandInterval` option.
- * Alternate between click and freehand methods, as desired, while drawing a single line. For example, click a few points, then hold and drag the mouse to add a bunch of points, then click a few more.
+
+- Click mode
+ - Select the left mouse button, or touch the map to add each point of a line on the map. A coordinate is added to the line for each click or touch.
+ - If the mouse is over the map, press the `F` key, and a point is added at the coordinate of the mouse pointer. This method provides higher accuracy for adding a point to the map. There's less movement on the mouse due to the pressing motion of the left mouse button.
+ - Keep clicking until all the desired points have been added to the line.
+- Freehand mode
+ - Press down the left mouse button, or touch-down on the map and drag the mouse, or touch point around. Coordinates are added to the line as the mouse or touch point moves around the map. As soon as the mouse or touch-up event is triggered, the drawing is completed. The drawing manager's `freehandInterval` option defines the frequency at which coordinates are added.
+- Hybrid mode
+ - Alternate between click and freehand methods, as desired, while drawing a single line. For example, click a few points, then hold and drag the mouse to add a bunch of points, then click a few more.
**Finish drawing**
- * Double-click the map at the last point.
- * Click on any button in the drawing toolbar.
- * Programmatically set the drawing mode.
- * Release the mouse button or touch point.
+
+- Hybrid/Click mode
+ - Double-click the map at the last point.
+ - Click on any button in the drawing toolbar.
+ - Programmatically set the drawing mode.
+- Freehand mode
+ - Release the mouse button or touch point.
+- Press the `C` key.
**Cancel drawing**+
+- Press the `Escape` key.
### How to draw a polygon When the drawing manager is in `draw-polygon` mode, the following actions can be done to draw points on the map, depending on the interaction mode. **Start drawing**
- * Click the left mouse button, or touch the map to add each point of a polygon on the map. A coordinate is added to the polygon for each click or touch.
- * If the mouse is over the map, press the `F` key, and a point will be added at the coordinate of the mouse pointer. This method provides higher accuracy for adding a point to the map. There will be less movement on the mouse due to the pressing motion of the left mouse button.
- * Keep clicking until all the desired points have been added to the polygon.
- * Press down the left mouse button, or touch-down on the map and drag the mouse, or touch point around. Coordinates are added to the polygon as the mouse or touch point moves around the map. As soon as the mouse or touch-up event is triggered, the drawing is completed. The frequency at which coordinates are added is defined by the drawing managers `freehandInterval` option.
- * Alternate between click and freehand methods, as desired, while drawing a single polygon. For example, click a few points, then hold and drag the mouse to add a bunch of points, then click a few more.
+
+- Click mode
+ - Select the left mouse button, or touch the map to add each point of a polygon on the map. A coordinate is added to the polygon for each click or touch.
+ - If the mouse is over the map, press the `F` key, and a point is added at the coordinate of the mouse pointer. This method provides higher accuracy for adding a point to the map. There's less movement on the mouse due to the pressing motion of the left mouse button.
+ - Keep clicking until all the desired points have been added to the polygon.
+- Freehand mode
+ - Press down the left mouse button, or touch-down on the map and drag the mouse, or touch point around. Coordinates are added to the polygon as the mouse or touch point moves around the map. As soon as the mouse or touch-up event is triggered, the drawing is completed. The drawing manager's `freehandInterval` option defines the frequency at which coordinates are added.
+- Hybrid mode
+ - Alternate between click and freehand methods, as desired, while drawing a single polygon. For example, click a few points, then hold and drag the mouse to add a bunch of points, then click a few more.
**Finish drawing**
- * Double-click the map at the last point.
- * Click on the first point in the polygon.
- * Click on any button in the drawing toolbar.
- * Programmatically set the drawing mode.
- * Release the mouse button or touch point.
+
+- Hybrid/Click mode
+ - Double-click the map at the last point.
+ - Click on the first point in the polygon.
+ - Click on any button in the drawing toolbar.
+ - Programmatically set the drawing mode.
+- Freehand mode
+ - Release the mouse button or touch point.
+- Press the `C` key.
**Cancel drawing**+
+- Press the `Escape` key.
### How to draw a rectangle
-When the drawing manager is in `draw-rectangle` mode, the following actions can be done to draw points on the map, depending on the interaction mode. The generated shape will follow the [extended GeoJSON specification for rectangles](extend-geojson.md#rectangle).
+When the drawing manager is in `draw-rectangle` mode, the following actions can be done to draw points on the map, depending on the interaction mode. The generated shape follows the [extended GeoJSON specification for rectangles].
**Start drawing**+
+- Press down the left mouse button, or touch-down on the map to add the first corner of the rectangle and drag to create the rectangle.
**Finish drawing**+
+- Release the mouse button or touch point.
+- Programmatically set the drawing mode.
+- Press the `C` key.
**Cancel drawing**+
+- Press the `Escape` key.
### How to draw a circle
-When the drawing manager is in `draw-circle` mode, the following actions can be done to draw points on the map, depending on the interaction mode. The generated shape will follow the [extended GeoJSON specification for circles](extend-geojson.md#circle).
+When the drawing manager is in `draw-circle` mode, the following actions can be done to draw points on the map, depending on the interaction mode. The generated shape follows the [extended GeoJSON specification for circles].
**Start drawing**+
+- Press down the left mouse button, or touch-down on the map to add the center of the circle and drag to give the circle a radius.
**Finish drawing**+
+- Release the mouse button or touch point.
+- Programmatically set the drawing mode.
+- Press the `C` key.
**Cancel drawing**+
+- Press the `Escape` key.
## Keyboard shortcuts
The drawing tools support keyboard shortcuts. These keyboard shortcuts are funct
| Key | Action | |-|--|
-| `C` | Completes any drawing that is in progress and sets the drawing mode to idle. Focus will move to top-level map element. |
-| `Escape` | Cancels any drawing that is in progress and sets the drawing mode to idle. Focus will move to top-level map element. |
+| `C` | Completes any drawing that is in progress and sets the drawing mode to idle. Focus moves to top-level map element. |
+| `Escape` | Cancels any drawing that is in progress and sets the drawing mode to idle. Focus moves to top-level map element. |
| `F` | Adds a coordinate to a point, line, or polygon if the mouse is over the map. Equivalent action of clicking the map when in click or hybrid mode. This shortcut allows for more precise and faster drawings. You can use one hand to position the mouse and the other to press the button without moving the mouse from the press gesture. |
-| `Delete` or `Backspace` | If shapes is selected while the edit mode, delete them. |
+| `Delete` or `Backspace` | If shapes are selected while in edit mode, delete them. |
## Next steps Learn more about the classes in the drawing tools module: > [!div class="nextstepaction"]
-> [Drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager)
+> [Drawing manager]
> [!div class="nextstepaction"]
-> [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar)
+> [Drawing toolbar]
+
+[extended GeoJSON specification for rectangles]: extend-geojson.md#rectangle
+[extended GeoJSON specification for circles]: extend-geojson.md#circle
+[Drawing manager]: /javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager
+[Drawing toolbar]: /javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar
azure-maps Geofence Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md
The Azure Maps [GET Geofence] and [POST Geofence] API allow you to retrieve proximity of a coordinate relative to a provided geofence or set of fences. This article details how to prepare the geofence data that can be used in the Azure Maps GET and POST API.
-The data for geofence or set of geofences is represented by `Feature` Object and `FeatureCollection` Object in `GeoJSON` format, which is defined in [rfc7946]. In Addition to it:
+The data for a geofence or set of geofences, represented by the `Feature` Object and `FeatureCollection` Object in `GeoJSON` format, is defined in [rfc7946]. In addition:
* The GeoJSON Object type can be a `Feature` Object or a `FeatureCollection` Object. * The Geometry Object type can be a `Point`, `MultiPoint`, `LineString`, `MultiLineString`, `Polygon`, `MultiPolygon`, and `GeometryCollection`. * All feature properties should contain a `geometryId`, which is used for identifying the geofence. * Feature with `Point`, `MultiPoint`, `LineString`, `MultiLineString` must contain `radius` in properties. `radius` value is measured in meters, the `radius` value ranges from 1 to 10000.
-* Feature with `polygon` and `multipolygon` geometry type does not have a radius property.
+* Feature with `polygon` and `multipolygon` geometry type doesn't have a radius property.
* `validityTime` is an optional property that lets the user set expired time and validity time period for the geofence data. If not specified, the data never expires and is always valid.
-* The `expiredTime` is the expiration date and time of geofencing data. If the value of `userTime` in the request is later than this value, the corresponding geofence data is considered as expired data and is not queried. Upon which, the geometryId of this geofence data will be included in `expiredGeofenceGeometryId` array within the geofence response.
-* The `validityPeriod` is a list of validity time period of the geofence. If the value of `userTime` in the request falls outside of the validity period, the corresponding geofence data is considered as invalid and will not be queried. The geometryId of this geofence data is included in `invalidPeriodGeofenceGeometryId` array within geofence response. The following table shows the properties of validityPeriod element.
+* The `expiredTime` is the expiration date and time of geofencing data. If the value of `userTime` in the request is later than this value, the corresponding geofence data is considered expired and isn't queried. In that case, the geometryId of this geofence data is included in the `expiredGeofenceGeometryId` array within the geofence response.
+* The `validityPeriod` is a list of validity time period of the geofence. If the value of `userTime` in the request falls outside of the validity period, the corresponding geofence data is considered as invalid and isn't queried. The geometryId of this geofence data is included in `invalidPeriodGeofenceGeometryId` array within geofence response. The following table shows the properties of validityPeriod element.
| Name | Type | Required | Description | | : |:: |::| :--|
The data for geofence or set of geofences is represented by `Feature` Object and
| businessDayOnly | Boolean | false | Indicate whether the data is only valid during business days. Default value is `false`.| * All coordinate values are represented as [longitude, latitude] defined in `WGS84`.
+* For each Feature that contains `MultiPoint`, `MultiLineString`, `MultiPolygon`, or `GeometryCollection`, the properties are applied to all the elements. For example, all the points in `MultiPoint` use the same radius to form a multiple circle geofence.
+* For each Feature, which contains `MultiPoint`, `MultiLineString`, `MultiPolygon` , or `GeometryCollection`, the properties are applied to all the elements. for example: All the points in `MultiPoint` use the same radius to form a multiple circle geofence.
* In point-circle scenario, a circle geometry can be represented using a `Point` geometry object with properties elaborated in [Extending GeoJSON geometries].
-Following is a sample request body for a geofence represented as a circle geofence geometry in `GeoJSON` using a center point and a radius. The valid period of the geofence data starts from 2018-10-22, 9AM to 5PM, repeated every day except for the weekend. `expiredTime` indicates this geofence data will be considered expired, if `userTime` in the request is later than `2019-01-01`.
+Following is a sample request body for a geofence represented as a circle geofence geometry in `GeoJSON` using a center point and a radius. The valid period of the geofence data starts from `2018-10-22`, 9AM to 5PM, repeated every day except for the weekend. `expiredTime` indicates this geofence data is considered expired if `userTime` in the request is later than `2019-01-01`.
```json {
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
On the **Condition** tab, the **Log query** will already be filled in. The **Mea
## Configure alert logic In the alert logic, configure the **Operator** and **Threshold value** to compare to the value returned from the measurement. An alert is created when this comparison is true. Select a value for **Frequency of evaluation**, which defines how often the log query is run and evaluated. The cost for the alert rule increases with a lower frequency. When you select a frequency, the estimated monthly cost is displayed in addition to a preview of the query results over a time period.
-For example, if the measurement is **Table rows**, the alert logic may be **Great than 0** indicating that at least one record was returned. If the measurement is a columns value, then the logic may need to be greater than or less than a particular threshold value. In the example below, the log query is looking for anonymous requests to a storage account. If an anonymous request has been made, then we should trigger an alert. In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**.
+For example, if the measurement is **Table rows**, the alert logic may be **Greater than 0** indicating that at least one record was returned. If the measurement is a column's value, then the logic may need to be greater than or less than a particular threshold value. In the example below, the log query is looking for anonymous requests to a storage account. If an anonymous request has been made, then we should trigger an alert. In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**.
:::image type="content" source="media/tutorial-log-alert/alert-rule-alert-logic.png" lightbox="media/tutorial-log-alert/alert-rule-alert-logic.png"alt-text="Alert logic":::
Click **Create alert rule** to create the alert rule.
Now that you've learned how to create a log query alert for an Azure resource, have a look at workbooks for creating interactive visualizations of monitoring data. > [!div class="nextstepaction"]
-> [Azure Monitor Workbooks](../visualize/workbooks-overview.md)
+> [Azure Monitor Workbooks](../visualize/workbooks-overview.md)
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Use any of the following methods to install the Azure Monitor agent on your AKS
- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. - The aks-preview extension must be installed by using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).-- The aks-preview version 0.5.122 or higher is required for this feature. Check the aks-preview version by using the `az version` command.
+- The aks-preview version 0.5.136 or higher is required for this feature. Check the aks-preview version by using the `az version` command.
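Taken together, the prerequisite commands mentioned above can be run as a short script. The `az extension update` line is an addition for machines that already have the extension installed; the rest comes straight from the steps above:

```bash
# Register the preview feature flag in the Azure Kubernetes clusters subscription.
az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview

# Install (or update) the aks-preview extension for the Azure CLI.
az extension add --name aks-preview
az extension update --name aks-preview

# Confirm the installed aks-preview version is 0.5.136 or higher.
az version
```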
#### Install the metrics add-on
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
na Previously updated : 08/31/2022 Last updated : 04/09/2023 # Configure application volume groups for the SAP HANA REST API
-Application volume group (AVG) enables you to deploy all volumes for a single HANA host in one atomic step. The Azure portal and the Azure Resource Manager template have implemented pre-checks and recommendations for deployment in areas including throughputs and volume naming conventions. As a REST API user, those checks and recommendations are not available.
+Application volume groups (AVG) enable you to deploy all volumes for a single HANA host in one atomic step. The Azure portal and the Azure Resource Manager template have implemented prechecks and recommendations for deployment in areas including throughputs and volume naming conventions. As a REST API user, those checks and recommendations are not available.
Without these checks, it's important to understand the requirements for running HANA on Azure NetApp Files and the basic architecture and workflows on which application volume groups are built.
Using application volume groups requires understanding the rules and restriction
* For data, log and shared volumes, SAP HANA certification requires NFSv4.1 protocol. * Log-backup and file-backup volumes, if created optionally with the volume group of the first HANA host, may use NFSv4.1 or NFSv3 protocol. * Each volume must have at least one export policy defined. To install SAP, root access must be enabled.
-* Kerberos nor LDAP enablement are not supported.
+* Kerberos and LDAP enablement are not supported.
* You should follow the naming convention outlined in the following table. The following list describes all the possible volume types for application volume groups for SAP HANA.
The following list describes all the possible volume types for application volum
## Prepare your environment
-1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, a VNet needs to be created and within the vNet a delegated subnet where the ANF storage endpoints (IPs) will be placed. To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
+1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, you need to create a VNet that will host a delegated subnet for the Azure NetApp Files storage endpoints (IPs). To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
1. Create a VNet.
- 2. Create a virtual machine (VM) subnet and delegated subnet for ANF.
+ 2. Create a virtual machine (VM) subnet and delegated subnet for Azure NetApp Files.
1. **Storage Account and Capacity Pool:** A storage account is the entry point to consume Azure NetApp Files. At least one storage account needs to be created. Within a storage account, a capacity pool is the logical unit to create volumes. Application volume groups require a capacity pool with a manual QoS. It should be created with a size and service level that meets your HANA requirements. >[!NOTE] > A capacity pool can be resized at any time. For more information about changing a capacity pool, refer to [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md). 1. Create a NetApp storage account. 2. Create a manual QoS capacity pool.
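A minimal Azure CLI sketch of this step, with placeholder resource names, location, pool size, and service level that you would adapt to your HANA sizing (the `--qos-type` flag assumes a recent CLI version):

```bash
# Create the NetApp storage account
az netappfiles account create --resource-group myRG --name myNetAppAccount --location westeurope

# Create a capacity pool with manual QoS (size is specified in TiB)
az netappfiles pool create --resource-group myRG --account-name myNetAppAccount \
    --name myHanaPool --location westeurope --size 10 --service-level Premium --qos-type Manual
```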
-1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs will not be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups).
+1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs won't be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups).
1. Create AvSet. 2. Create PPG. 3. Assign PPG to AvSet.
-1. **Manual Steps - Request AvSet pinning**: AvSet pinning is required for long term SAP HANA systems. The Microsoft capacity planning team ensures that the required VMs for SAP HANA and Azure NetApp Files resources be in proximity to the VMs that are available. VMs will not move on restart.
+1. **Manual Steps - Request AvSet pinning**: AvSet pinning is required for long term SAP HANA systems. The Microsoft capacity planning team ensures that the required VMs for SAP HANA and Azure NetApp Files resources are in proximity to available VMs. VMs will not move on restart.
* Request pinning using [this form](https://aka.ms/HANAPINNING). 1. **Create and start HANA DB VM:** Before you can create volumes using application volume groups, the PPG must be anchored. At least one VM must be created using the pinned AvSet. Once this VM is started, the PPG can be used to detect where the VM is running. 1. Create and start the VM using the AvSet.
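A minimal Azure CLI sketch of the AvSet, PPG, and anchor-VM steps; the resource names, image URN, and VM size are placeholders, and production deployments should follow the pinning process described above:

```bash
# Create the proximity placement group
az ppg create --resource-group myRG --name SH9-PPG --location westeurope

# Create the availability set and associate it with the PPG
az vm availability-set create --resource-group myRG --name SH9-AvSet --location westeurope --ppg SH9-PPG

# Create and start the anchor VM in the availability set so the PPG is anchored to the data center
az vm create --resource-group myRG --name SH9-hanadb01 --availability-set SH9-AvSet \
    --image SUSE:sles-sap-15-sp4:gen2:latest --size Standard_E64ds_v5
```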
The following table describes the request body parameters and group level proper
| `applicationType` | Application type | Must be "SAP-HANA" | | `applicationIdentifier` | Application specific identifier string, following application naming rules | The SAP System ID, which should follow aforementioned naming rules, for example `SH9` | | `deploymentSpecId` | Deployment specification identifier defining the rules to deploy the specific application volume group type | Must be: "20542149-bfca-5618-1879-9863dc6767f1" |
-| `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes)</li><li>**Required**: _data_, _log_ and _shared_. **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-Host (two volumes)
-Required: _data_ and _log_.</li><ul> |
+| `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> |
This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group. | Volume-level request parameter | Description | Restrictions for SAP HANA | | - | -- | -- |
-| `name` | Volume name | None. Examples or recommended volume names: <ul><li> `SH9-data-mnt00001` data for Single-Host.</li><li> `SH9-log-backup` log-backup for Single-Host.</li><li> `HSR-SH9-shared` shared for HSR Secondary.</li><li> `DR-SH9-data-backup` data-backup for CRR destination </li><li> `DR2-SH9-data-backup` data-backup for CRR destination of HSR Secondary.</li></ul> |
+| `name` | Volume name | None. Examples or recommended volume names: <ul><li> `SH9-data-mnt00001` data for Single-Host.</li><li> `SH9-log-backup` log-backup for Single-Host.</li><li> `HSR-SH9-shared` shared for HSR Secondary.</li><li> `DR-SH9-data-backup` data-backup for cross-region replication destination </li><li> `DR2-SH9-data-backup` data-backup for cross-region replication destination of HSR Secondary.</li></ul> |
| `tags` | Volume tags | None, however, it may be helpful to add a tag to the HSR partner volume to identify the corresponding HSR partner volume. The Azure portal suggests the following tag for the HSR Secondary volumes: <ul><li> **Name**: `HSRPartnerStorageResourceId` </li><li> **Value:** `<Partner volume Id>` </li></ul> | | **Volume properties** | **Description** | **SAP HANA Value Restrictions** |
-| `creationToken` | Export path name, typically same as name above. | None. Example: `SH9-data-mnt00001` |
+| `creationToken` | Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001` |
| `throughputMibps` | QoS throughput | This must be between 1 Mbps and 4500 Mbps. You should set throughput based on volume type. |
-| `usageThreshhold` | Size of the volume in bytes. This must be in the 100 GiB to 100 TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
+| `usageThreshhold` | Size of the volume in bytes. This must be in the 100 GiB to 100 TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rule values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li></ul> All other rule values _must_ be left defaulted. | | `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> | | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The "data", "log" and "shared" volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the "data-backup" and "log-backup" volumes, but it will be ignored during placement.</li></ul> |
-| `subnetId` | Delegated subnet ID for Azure NetApp Files. | In a normal case where there are sufficient resources available, the number of IP addresses required in the subnet depends on the order of the application volume group created in the subscription: <ol><li> First application volume group created: the creation usually requires to 3-4 IP addresses but can require up to 5</li><li> Second application volume group created: Normally requires two IP addresses</li><li></li>Third and subsequent application volume group created: Normally, more IP addresses will not be required</ol> |
+| `subnetId` | Delegated subnet ID for Azure NetApp Files. | In a normal case where there are sufficient resources available, the number of IP addresses required in the subnet depends on the order of the application volume group created in the subscription: <ol><li> First application volume group created: the creation usually requires 3-4 IP addresses but can require up to 5</li><li> Second application volume group created: Normally requires two IP addresses</li><li> Third and subsequent application volume groups created: Normally, more IP addresses are not required</li></ol> |
| `capacityPoolResourceId` | ID of the capacity pool | The capacity pool must be of type manual QoS. Generally, all SAP volumes are placed in a common capacity pool, however this is not a requirement. | | `protocolTypes` | Protocol to use | This should be either NFSv3 or NFSv4.1 and should match the protocol specified in the Export Policy Rule described earlier in this table. |
This table describes the request body parameters and volume properties for creat
The examples in this section illustrate the values passed in the volume group creation request for various SAP HANA configurations. The examples demonstrate best practices for naming, sizing, and values as described in the tables.
-In the examples below, selected placeholders are specified and should be replaced by the desired values, these include:
+In the following examples, selected placeholders are specified. You should replace them with the values specific to your configuration. These values include:
1. `<SubscriptionId>`: Subscription ID. Example: `11111111-2222-3333-4444-555555555555` 2. `<ResourceGroup>`: Resource group. Example: `TestResourceGroup` 3. `<NtapAccount>`: NetApp account, for example: `TestAccount`
In the examples below, selected placeholders are specified and should be replace
SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:
-1. Extract the subscription ID. This will automate the extraction of the subscription ID and generate the authorization token:
+1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token:
```bash subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r) echo "Subscription ID: $subId"
To create the five volumes (data, log, shared, data-backup, log-backup) for a si
>[!NOTE] >You need to replace the placeholders and adapt the parameters to meet your requirements.
-#### Example single-host SAP HANA application volume group creation Request
+#### Example single-host SAP HANA application volume group creation request
-This example pertains to data, log, shared, data-backup, and log-backup volumes demonstrating best practices for naming, sizing, and throughputs. This example will serve as the primary volume if you're configuring an HSR pair.
+This example pertains to data, log, shared, data-backup, and log-backup volumes demonstrating best practices for naming, sizing, and throughputs. This example serves as the primary volume if you're configuring an HSR pair.
1. Save the JSON template as `sh9.json`: ```json
This example encompasses the creation of data, log, shared, data-backup, and log
} ```
-### Example 4: Deploy volumes for a secondary HANA system using HANA system replication
+### Example 4: Deploy volumes for a disaster recovery HANA system using cross-region replication
-Cross-region replication is one way to set up a disaster recovery configuration for HANA, where the volumes of the HANA database in the DR-region are replicated on the storage side using cross-region replication in contrast to HSR, which replicates at the application level where it requires to have the HANA VMs deployed and running. Refer to the documentation (link) to understand which volumes require CRR replication. Refer to [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md) to understand for which volumes in cross-region replication relations are required (data, shared, log-backup), not allowed (log), or optional (data-backup).
+Cross-region replication is one way to set up a disaster recovery configuration for HANA. The volumes of the HANA database in the DR region are replicated on the storage side using cross-region replication, in contrast to HSR, which replicates at the application level and requires the HANA VMs to be deployed and running. Refer to [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md) to understand for which volumes cross-region replication relationships are required (data, shared, log-backup), not allowed (log), or optional (data-backup).
In this example, the following placeholders are specified and should be replaced by values specific to your configuration: 1. `<CapacityPoolResourceId3>`: DR capacity pool resource ID, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/account1/capacityPools/DR_SH9_HSR_Pool` 2. `<ProximityPlacementGroupResourceId3>`: DR proximity placement group, for example:`/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/test/providers/Microsoft.Compute/proximityPlacementGroups/DR_SH9_PPG`
-3. `<SrcVolumeId_data>`, `<SrcVolumeId_shared>`, `<SrcVolumeId_data-backup>`, `<SrcVolumeId_log-backup>`: cross-region replication source volume IDs for the data, log, shared, and log-backup cross-region replication destination volumes.
+3. `<SrcVolumeId_data>`, `<SrcVolumeId_shared>`, `<SrcVolumeId_data-backup>`, `<SrcVolumeId_log-backup>`: cross-region replication source volume IDs for the data, shared, and log-backup cross-region replication destination volumes.
```json {
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
The output from the preceding example with the default values is:
### Quickstart examples
-The following example is extracted from a quickstart template, [SQL Server VM with performance optimized storage settings
-](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.attestation/attestation-provider-create/main.bicep):
+The following example is extracted from a quickstart template, [Virtual Network with diagnostic logs settings](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/vnet-create-with-diagnostic-logs/main.bicep):
```bicep @description('Array containing DNS Servers')
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Title: Delete resource group and resources description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Previously updated : 10/13/2022 Last updated : 04/10/2023
This article shows how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when you delete a resource group. + ## How order of deletion is determined When you delete a resource group, Resource Manager determines the order to delete resources. It uses the following order:
az group delete --name ExampleResourceGroup
1. To confirm the deletion, type the name of the resource group
+# [Python](#tab/azure-python)
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.begin_delete("exampleGroup")
+```
+ ## Delete resource
az resource delete \
1. When prompted, confirm the deletion.
+# [Python](#tab/azure-python)
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_client.resources.begin_delete_by_id(
+ "/subscriptions/{}/resourceGroups/{}/providers/{}/{}".format(
+ subscription_id,
+ "exampleGroup",
+ "Microsoft.Compute",
+ "virtualMachines/exampleVM"
+ ),
+ "2022-11-01"
+)
+```
+ ## Required access and deletion failures
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Microsoft will regularly apply important updates to the Azure VMware Solution fo
All new Azure VMware Solution private clouds are being deployed with VMware NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023.
-VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html) like, Replicated Assisted vMotion (RAV), and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
+**HCX Enterprise Edition - Default**
+
+VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html), like Replicated Assisted vMotion (RAV) and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
**Log analytics - monitor Azure VMware Solution**
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
If you downloaded the software package to a different server, copy the files to
1. After the installation step finishes, select **Close**.
-### Install Update Rollup 2
-
-Installing the Update Rollup 2 for Azure Backup Server v3 is mandatory before you can protect the workloads. You can find the bug fixes and installation instructions in the [knowledge base article](https://support.microsoft.com/help/5004579/).
+### Install Update Rollup 2 for Microsoft Azure Backup Server (MABS) version 3
+Installing the Update Rollup 2 for Microsoft Azure Backup Server (MABS) version 3 is mandatory for protecting the workloads. You can find the bug fixes and installation instructions in the [knowledge base article](https://support.microsoft.com/help/5004579/).
## Add storage to Azure Backup Server Azure Backup Server v3 supports Modern Backup Storage that offers:
Now that you've covered how to set up Azure Backup Server for Azure VMware Solut
- [Configuring backups for your Azure VMware Solution VMs](backup-azure-vmware-solution-virtual-machines.md). - [Protecting your Azure VMware Solution VMs with Microsoft Defender for Cloud integration](azure-security-integration.md).++
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
+<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
The following table describes the network topologies supported by each network f
|Connectivity to BareMetal (BM) in a local VNet| Yes | |Connectivity to BM in a peered VNet (Same region)|Yes | |Connectivity to BM in a peered VNet (Cross region or global peering)|No |
-|On-premises Connectivity to Delegated Subnet via Global and Local Expressroute |Yes|
+|On-premises connectivity to Delegated Subnet via Global and Local ExpressRoute |Yes|
|ExpressRoute (ER) FastPath |No | |Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
-|On-premises Connectivity to Delegated Subnet via VPN GW| Yes |
+|On-premises connectivity to Delegated Subnet via VPN GW| Yes |
|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes | |Connectivity over Active/Passive VPN gateways| Yes | |Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No | |Transit connectivity via vWAN for Spoke Delegated VNETS| No | |On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No|
-|On-premises Connectivity via Secured HUB(Az Firewall NVA) | No|
+|On-premises connectivity via Secured HUB (Az Firewall NVA) | No|
## Constraints
cognitive-services Platform Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/platform-upgrade.md
Previously updated : 03/30/2023 Last updated : 04/10/2023
> [!CAUTION] >
-> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the v2.0 platform for continued processing and support.
+> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the new platform for continued processing and support.
-Following measured and consistent high-quality results using models trained on the Custom Translator v2.0 platform, the v1.0 platform is retiring. Custom Translator v2.0 delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the v2.0 platform by June 02, 2023.
+Following measured and consistent high-quality results using models trained on the Custom Translator new platform, the v1.0 platform is retiring. The new Custom Translator platform delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the new platform by June 02, 2023.
## Custom Translator v1.0 upgrade timeline * **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
-* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to v2.0 models.
+* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to new platform models.
* **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
-## Upgrade to v2.0
+## Upgrade to new platform
> [!IMPORTANT] > > * Starting **May 01, 2023** the upgrade wizard and workspace banner will be displayed in the Custom Translator portal indicating that you have v1.0 models to upgrade. > * The banner contains a **Select** button that takes you to the upgrade wizard where a list of all your v1.0 models available for upgrade are displayed.
-> * Select any or all of your v1.0 models then select **Train** to start v2.0 model upgrade training.
+> * Select any or all of your v1.0 models then select **Train** to start new platform model training.
* **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
-* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the v2.0 platform.
+* **Use the upgrade wizard**. Follow the steps listed in the **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the new platform.
## Unpublished and opt-out published models
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Use this article to learn which natural languages are supported by document and
| Language | Language code | Notes | |--|||
+| Chinese-Simplified | `zh-hans` | `zh` also accepted |
| English | `en` | |
+| French | `fr` | |
+| German | `de` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Korean | `ko` | |
+| Spanish | `es` | |
+| Portuguese | `pt` | |
# [Conversation summarization (preview)](#tab/conversation-summarization)
communication-services Skip Setup Screen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/skip-setup-screen.md
+
+ Title: Skip setup screen of the UI Library
+
+description: Use Azure Communication Services UI Library for Mobile native to skip the setup screen
++++ Last updated : 03/21/2023+
+zone_pivot_groups: acs-plat-ios-android
+
+#Customer intent: As a developer, I want to skip setup screen of the library in my application
++
+# Skip setup screen
+
+The skip setup screen feature enables users to join a call directly, without passing through the setup screen or any other user interaction, and lets developers build their communication application with the UI Library accordingly. The feature also provides APIs to turn the camera and microphone on or off, so developers can configure the default state of the camera and microphone before joining a call.
+
+Learn how to set up the skip setup screen feature correctly in your application.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
+++
+## Next steps
+
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
confidential-computing Skr Flow Confidential Containers Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md
Upon error, the `key/release` POST method response carries a `StatusForbidden` h
## Custom implementation with your container application
-To perform a custom container application that extends the capability of Azure Key Vault (AKV) - Secure Key Release and Microsoft Azure Attestation (MAA), use the below as a high level reference flow. An easy approach is to review the current side-car implementation code in this [side-car Github project](https://github.com/microsoft/confidential-sidecar-containers/tree/d933d0f4e3d5498f7ed9137189ab6a23ade15466/pkg/common).
+To implement a custom container application that extends the capability of Azure Key Vault (AKV) - Secure Key Release and Microsoft Azure Attestation (MAA), use the following high-level reference flow. An easy approach is to review the current side-car implementation code in this [side-car GitHub project](https://github.com/microsoft/confidential-sidecar-containers/tree/d933d0f4e3d5498f7ed9137189ab6a23ade15466/pkg/common).
![Image of the aforementioned operations, which you should be performing.](media/skr-flow-azure-container-instance-sev-snp-attestation/skr-flow-custom-container.png)
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-striim.md
In this section, you will configure the Azure Cosmos DB for Apache Cassandra acc
1. From the same terminal window, restart the Striim server by executing the following commands: ```bash
- Systemctl stop striim-node
- Systemctl stop striim-dbms
- Systemctl start striim-dbms
- Systemctl start striim-node
+ systemctl stop striim-node
+ systemctl stop striim-dbms
+ systemctl start striim-dbms
+ systemctl start striim-node
``` 1. Striim will take a minute to start up. If you'd like to see the status, run the following command:
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-data-striim.md
In this section, you will configure the Azure Cosmos DB for NoSQL account as the
1. From the same terminal window, restart the Striim server by executing the following commands: ```bash
- Systemctl stop striim-node
- Systemctl stop striim-dbms
- Systemctl start striim-dbms
- Systemctl start striim-node
+ systemctl stop striim-node
+ systemctl stop striim-dbms
+ systemctl start striim-dbms
+ systemctl start striim-node
``` 1. Striim will take a minute to start up. If you'd like to see the status, run the following command:
cosmos-db Concepts Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-customer-managed-keys.md
+
+ Title: Concepts of customer managed keys in Azure Cosmos DB for PostgreSQL.
+description: Concepts of customer managed keys.
+++++ Last updated : 04/06/2023+
+# Customer-managed keys in Azure Cosmos DB for PostgreSQL
++
+Data stored in your Azure Cosmos DB for PostgreSQL cluster is automatically and seamlessly encrypted with keys managed by Microsoft. These keys are referred to as **service-managed keys**. Azure Cosmos DB for PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using service-managed keys. You can optionally choose to add an extra layer of security by enabling encryption with **customer-managed keys**.
+++
+## Service-managed keys
+
+The Azure Cosmos DB for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. All data, including backups and temporary files created while running queries, is encrypted on disk. The service uses the AES 256-bit cipher included in Azure Storage encryption, and the keys are system-managed. Storage encryption is always on and cannot be disabled.
+
+## Customer-managed keys
+
+Many organizations require full control of access to data using a customer-managed key. Data encryption with customer-managed keys for Azure Cosmos DB for PostgreSQL enables you to bring your own key for protecting data at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, usage permissions, and auditing of operations.
+
+Data encryption with customer-managed keys for Azure Cosmos DB for PostgreSQL is set at the server level. Data, including backups, are encrypted on disk. This encryption includes the temporary files created while running queries. For a given cluster, a customer-managed key, called the key encryption key (**KEK**), is used to encrypt the service's data encryption key (**DEK**). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/index.yml) instance.
+
+| | Description |
+| | |
+| **Data encryption key (DEK)** | A data encryption key is a symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes cryptanalysis attacks more difficult. The resource provider or application instance that is encrypting and decrypting a specific block requires access to DEKs. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key. |
+| **Key encryption key (KEK)** | A key encryption key is an encryption key used to encrypt the DEKs. A KEK that never leaves a key vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point of control, and deletion of the KEK effectively deletes the DEKs. |
+
+> [!NOTE]
+> Azure Key Vault is a cloud-based key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (**HSM**s). A key vault doesn't allow direct access to a stored key but provides encryption and decryption services to authorized entities. A key vault can generate the key, import it, or have it transferred from an on-premises HSM device.
+
+The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
+
+## How data encryption with a customer-managed key works
+
+For a cluster to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server:
+
+| | Description |
+| | |
+| **get** | Enables retrieving the public part and properties of the key in the key vault. |
+| **wrapKey** | Enables encryption of the DEK. The encrypted DEK is stored in Azure Cosmos DB for PostgreSQL. |
+| **unwrapKey** | Enables decryption of the DEK. Azure Cosmos DB for PostgreSQL requires the decrypted DEK to encrypt/decrypt data. |
+
+The key vault administrator can also enable logging of Key Vault audit events, so they can be audited later.
+When the Azure Cosmos DB for PostgreSQL cluster is configured to use the customer-managed key stored in the key vault, the cluster sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use [Azure Monitor](../../azure-monitor/index.yml) to review Key Vault audit event logs, if logging is enabled.
+
+[ ![Screenshot of architecture of Data Encryption with Customer Managed Keys.](media/concepts-customer-managed-keys/architecture-customer-managed-keys.png)](media/concepts-customer-managed-keys/architecture-customer-managed-keys.png#lightbox)
+
+## Benefits
+
+Data encryption with customer-managed keys for Azure Cosmos DB for PostgreSQL provides the following benefits:
+
+- You fully control data access with the ability to remove the key and make the database inaccessible.
+- Full control over the key lifecycle, including rotation of the key to align with specific corporate policies.
+- Central management and organization of keys in Azure Key Vault.
+- Ability to implement separation of duties between security officers, database administrators, and system administrators.
+- Enabling encryption doesn't have any extra performance effect with or without customer-managed keys. Azure Cosmos DB for PostgreSQL relies on Azure Storage for data encryption in both customer-managed and service-managed key scenarios.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+>[Enable encryption with customer managed keys](how-to-customer-managed-keys.md)
cosmos-db How To Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-customer-managed-keys.md
+
+ Title: How to enable encryption with customer managed keys in Azure Cosmos DB for PostgreSQL.
+description: How to enable data encryption with customer managed keys.
+++++ Last updated : 04/06/2023+
+# Enable data encryption with customer-managed keys in Azure Cosmos DB for PostgreSQL
++
+## Prerequisites
+
+- An existing Azure Cosmos DB for PostgreSQL account.
+ - If you have an Azure subscription, [create a new account](../nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+
+## Enable data encryption with customer-managed keys
+
+> [!IMPORTANT]
+> Create all the following resources in the same region where your Azure Cosmos DB for PostgreSQL cluster will be deployed.
+
+1. Create a User-Assigned Managed Identity. Currently, Azure Cosmos DB for PostgreSQL only supports user-assigned managed identities.
+
+1. Create an Azure Key Vault and add an access policy to the created User-Assigned Managed Identity with the following key permissions: Get, Unwrap Key, and Wrap Key.
+
+1. Generate a key in the Key Vault (supported key types: RSA 2048, 3072, 4096).
+
+1. Select the Customer-Managed Key encryption option during the creation of the Azure Cosmos DB for PostgreSQL cluster and select the appropriate User-Assigned Managed Identity, Key Vault, and Key created in Steps 1, 2, and 3.
+
+## Detailed steps
+
+### User Assigned Managed Identity
+
+1. Search for Managed Identities in the global search bar.
+
+ ![Screenshot of Managed Identities in Azure portal.](media/how-to-customer-managed-keys/user-assigned-managed-identity.png)
++
+1. Create a new user-assigned managed identity in the same region as your Azure Cosmos DB for PostgreSQL cluster.
+
+ ![Screenshot of User assigned managed Identity page in Azure portal.](media/how-to-customer-managed-keys/user-assigned-managed-identity-provisioning.png)
++
+Learn more about [user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
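If you prefer the command line for this step, a minimal Azure CLI sketch is shown below; the resource group, identity name, and region are placeholders:

```bash
# Create the user-assigned managed identity in the same region as the cluster
az identity create --resource-group myRG --name cosmospg-cmk-identity --location eastus

# Capture the identity's principal ID; it's needed later when granting Key Vault access
principalId=$(az identity show --resource-group myRG --name cosmospg-cmk-identity --query principalId -o tsv)
```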
+
+### Key Vault
+
+Using customer-managed keys with Azure Cosmos DB for PostgreSQL requires you to set two properties on the Azure Key Vault instance that you plan to use to host your encryption keys: Soft Delete and Purge Protection.
+
+1. If you create a new Azure Key Vault instance, enable these properties during creation:
+
+ [ ![Screenshot of Key Vault's properties.](media/how-to-customer-managed-keys/key-vault-soft-delete.png) ](media/how-to-customer-managed-keys/key-vault-soft-delete.png#lightbox)
+
+1. If you're using an existing Azure Key Vault instance, you can verify that these properties are enabled by looking at the Properties section on the Azure portal. If any of these properties aren't enabled, see the "Enabling soft delete" and "Enabling Purge Protection" sections in one of the following articles.
+
+    * How to use [soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md).
+    * How to use [soft-delete with Azure CLI](../../key-vault/general/key-vault-recovery.md).
+
+1. The key vault must be set with 90 days for 'Days to retain deleted vaults'. If the existing key vault has been configured with a lower number, you'll need to create a new key vault because this setting can't be modified after creation.
+
+ > [!IMPORTANT]
+ > Your Azure Key Vault instance must allow public access from all networks.
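A minimal Azure CLI sketch for creating a vault with these properties; the vault name, resource group, and region are placeholders, and soft delete is already enabled by default on newly created vaults:

```bash
# Create the key vault with purge protection and a 90-day retention period for deleted vaults
az keyvault create --resource-group myRG --name cosmospg-cmk-kv --location eastus \
    --enable-purge-protection true --retention-days 90
```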
+
+### Add an Access Policy to the Key Vault
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select Access configuration from the left menu and then select Go to access policies.
+
+ [ ![Screenshot of Key Vault's access configuration.](media/how-to-customer-managed-keys/access-policy.png) ](media/how-to-customer-managed-keys/access-policy.png#lightbox)
+
+1. Select + Create.
+
+1. In the Permissions Tab under the Key permissions drop-down menu, select Get, Unwrap Key, and Wrap Key permissions.
+
+ [ ![Screenshot of Key Vault's permissions settings.](media/how-to-customer-managed-keys/access-policy-permissions.png) ](media/how-to-customer-managed-keys/access-policy-permissions.png#lightbox)
+
+1. In the Principal tab, select the user-assigned managed identity that you created in the prerequisites step.
+
+1. Navigate to Review + create and select Create.
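The same access policy can be granted from the Azure CLI. A minimal sketch, reusing the `principalId` value captured earlier for the user-assigned managed identity and the placeholder vault name:

```bash
# Grant the managed identity the Get, Wrap Key, and Unwrap Key permissions on the vault
az keyvault set-policy --name cosmospg-cmk-kv \
    --object-id "$principalId" \
    --key-permissions get wrapKey unwrapKey
```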
+
+### Create / Import Key
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys.
+
+1. Select Keys from the left menu and then select +Generate/Import.
+
+ [ ![Screenshot of Key generation page.](media/how-to-customer-managed-keys/create-key.png) ](media/how-to-customer-managed-keys/create-key.png#lightbox)
+
+1. The customer-managed key used for encrypting the DEK can only be an asymmetric RSA key. All RSA key sizes 2048, 3072, and 4096 are supported.
+
+1. The key activation date (if set) must be a date and time in the past. The expiration date (if set) must be a future date and time.
+
+1. The key must be in the Enabled state.
+
+1. If you're importing an existing key into the key vault, make sure to provide it in the supported file formats (`.pfx`, `.byok`, `.backup`).
+
+1. If you're manually rotating the key, the old key version shouldn't be deleted for at least 24 hours.
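A minimal Azure CLI sketch for generating a supported RSA key in the vault; the vault and key names are placeholders:

```bash
# Generate an RSA key in the vault to use as the key encryption key (KEK)
az keyvault key create --vault-name cosmospg-cmk-kv --name cosmospg-cmk-key --kty RSA --size 2048
```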
+
+### Enable CMK encryption during the provisioning for a new cluster
+
+ # [Portal](#tab/portal)
+
+ 1. During the provisioning of a new Cosmos DB for PostgreSQL cluster, after providing the necessary information under the Basics and Networking tabs, navigate to the Encryption (Preview) tab.
+ [ ![Screenshot of Encryption configuration page.](media/how-to-customer-managed-keys/encryption-tab.png)](media/how-to-customer-managed-keys/encryption-tab.png#lightbox)
+
+ 1. Select Customer Managed Key under Data encryption key option.
+
+ 1. Select the User Assigned Managed Identity created in the previous section.
+
+ 1. Select the Key Vault created in the previous step, which has the access policy to the user managed identity selected in the previous step.
+
+ 1. Select the Key created in the previous step, and then select Review+create.
+
+ 1. Verify that CMK encryption is enabled by navigating to the Data Encryption (preview) blade of the Cosmos DB for PostgreSQL cluster in the Azure portal.
+ ![Screenshot of data encryption tab.](media/how-to-customer-managed-keys/data-encryption-tab-note.png)
+
+ > [!NOTE]
+ > Data encryption can only be configured during the creation of a new cluster and can't be updated on an existing cluster. A workaround for updating the encryption configuration on an existing cluster is to restore an existing PITR backup to a new cluster and configure the data encryption during the creation of the newly restored cluster.
+
+ # [ARM Template](#tab/arm)
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "serverGroupName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "administratorLoginPassword": {
+ "type": "secureString"
+ },
+ "previewFeatures": {
+ "type": "bool"
+ },
+ "postgresqlVersion": {
+ "type": "string"
+ },
+ "coordinatorVcores": {
+ "type": "int"
+ },
+ "coordinatorStorageSizeMB": {
+ "type": "int"
+ },
+ "numWorkers": {
+ "type": "int"
+ },
+ "workerVcores": {
+ "type": "int"
+ },
+ "workerStorageSizeMB": {
+ "type": "int"
+ },
+ "enableHa": {
+ "type": "bool"
+ },
+ "enablePublicIpAccess": {
+ "type": "bool"
+ },
+ "serverGroupTags": {
+ "type": "object"
+ },
+ "userAssignedIdentityUrl": {
+ "type": "string"
+ },
+ "encryptionKeyUrl": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('serverGroupName')]",
+ "type": "Microsoft.DBforPostgreSQL/serverGroupsv2",
+ "kind": "CosmosDBForPostgreSQL",
+ "apiVersion": "2020-10-05-privatepreview",
+ "identity":
+ {
+ "type": "UserAssigned",
+ "userAssignedIdentities":
+ {
+ "/subscriptions/04b0358b-392b-41d6-899e-b75cb292321e/resourcegroups/yogkuleuapcmkmarlintest/providers/Microsoft.ManagedIdentity/userAssignedIdentities/marlincmktesteus2euapuai1": {}
+ }
+ },
+ "location": "[parameters('location')]",
+ "tags": "[parameters('serverGroupTags')]",
+ "properties": {
+ "createMode": "Default",
+ "administratorLogin": "citus",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "backupRetentionDays": 35,
+ "enableMx": false,
+ "enableZfs": false,
+ "previewFeatures": "[parameters('previewFeatures')]",
+ "postgresqlVersion": "[parameters('postgresqlVersion')]",
+ "dataencryption":
+ {
+ "primaryKeyUri": "[parameters('encryptionKeyUrl')]",
+ "primaryUserAssignedIdentityId": "[parameters('userAssignedIdentityUrl')]",
+ "type": "AzureKeyVault"
+ },
+ "serverRoleGroups": [
+ {
+ "name": "",
+ "role": "Coordinator",
+ "serverCount": 1,
+ "serverEdition": "GeneralPurpose",
+ "vCores": "[parameters('coordinatorVcores')]",
+ "storageQuotaInMb": "[parameters('coordinatorStorageSizeMB')]",
+ "enableHa": "[parameters('enableHa')]"
+ },
+ {
+ "name": "",
+ "role": "Worker",
+ "serverCount": "[parameters('numWorkers')]",
+ "serverEdition": "MemoryOptimized",
+ "vCores": "[parameters('workerVcores')]",
+ "storageQuotaInMb": "[parameters('workerStorageSizeMB')]",
+ "enableHa": "[parameters('enableHa')]",
+ "enablePublicIpAccess": "[parameters('enablePublicIpAccess')]"
+ }
+ ]
+ },
+ "dependsOn": []
+ }
+ ],
+ "outputs": {}
+ }
+ ```
++
+### High availability
+
+ When CMK encryption is enabled on the primary cluster, all standby HA replicas are automatically encrypted by the primary cluster's CMK.
+
+### Restrictions
+
+* CMK encryption can't be enabled on cross region read replicas.
+
+* CMK encryption can only be enabled during the creation of a new Azure Cosmos DB for PostgreSQL cluster.
+
+* CMK encryption isn't supported with Private access (including VNET).
+
+### Changing encryption configuration by performing a PITR
+
+Encryption configuration can be changed from service-managed encryption to CMK encryption, or vice versa, while performing a point-in-time restore (PITR) operation to a new cluster.
+
+# [Portal](#tab/portal)
+
+ 1. Navigate to the Data Encryption blade, and select Initiate restore operation. Alternatively, you can perform PITR by selecting the Restore option in the overview blade.
+ [ ![Screenshot of PITR.](media/how-to-customer-managed-keys/point-in-time-restore.png)](media/how-to-customer-managed-keys/point-in-time-restore.png#lightbox)
+
+ 1. You can change or configure the data encryption from the Encryption (preview) tab.
+
+# [ARM Template](#tab/arm)
+
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string"
+ },
+ "sourceLocation": {
+ "type": "string"
+ },
+ "subscriptionId": {
+ "type": "string"
+ },
+ "resourceGroupName": {
+ "type": "string"
+ },
+ "serverGroupName": {
+ "type": "string"
+ },
+ "sourceServerGroupName": {
+ "type": "string"
+ },
+ "restorePointInTime": {
+ "type": "string"
+ },
+ "encryptionKeyUrl": {
+ "type": "string"
+ },
+ "userAssignedIdentityUrl": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "api": "2020-02-14-privatepreview"
+ },
+ "resources": [
+ {
+ "apiVersion": "2020-10-05-privatepreview",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverGroupName')]",
+ "identity":
+ {
+ "type": "UserAssigned",
+ "userAssignedIdentities":
+ {
+ "/subscriptions/04b0358b-392b-41d6-899e-b75cb292321e/resourcegroups/yogkuleuapcmkmarlintest/providers/Microsoft.ManagedIdentity/userAssignedIdentities/marlincmktesteus2euapuai1": {}
+ }
+ },
+ "properties": {
+ "createMode": "PointInTimeRestore",
+ "sourceServerGroupName": "[parameters('sourceServerGroupName')]",
+ "pointInTimeUTC": "[parameters('restorePointInTime')]",
+ "sourceLocation": "[parameters('location')]",
+ "sourceSubscriptionId": "[parameters('subscriptionId')]",
+ "sourceResourceGroupName": "[parameters('resourceGroupName')]",
+ "enableMx": false,
+ "enableZfs": false,
+ "dataencryption":
+ {
+ "primaryKeyUri": "[parameters('encryptionKeyUrl')]",
+ "primaryUserAssignedIdentityId": "[parameters('userAssignedIdentityUrl')]",
+ "type": "AzureKeyVault"
+                }
+            },
+ "type": "Microsoft.DBforPostgreSQL/serverGroupsv2"
+ }
+ ]
+ }
+ ```
++
+### Monitor the customer-managed key in Key Vault
+
+To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
+
+* [Azure Resource Health](../../service-health/resource-health-overview.md): An inaccessible database that has lost access to the Customer Key shows as "Inaccessible" after the first connection to the database has been denied.
+
+* [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the Customer Key in the customer-managed Key Vault fails, entries are added to the activity log. If you create alerts for these events, you can reinstate access as soon as possible.
+
+* [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preference.
+++
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
+### April 2023
+
+* Public preview: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions.
+ * See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys.
+ ### March 2023 * General availability: Cluster compute [start / stop functionality](./concepts-compute-start-stop.md) is now supported across all configurations.
might have constrained capabilities. For more information, see
[Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
-There are no features currently available for preview.
+* Data encryption at rest using customer managed keys.
## Contact us
cost-management-billing Add Change Subscription Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md
tags: billing
Previously updated : 01/26/2023 Last updated : 04/10/2023
This article describes how to add or change the administrator role for a user us
This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md). If you have an Azure Enterprise Agreement, see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md).
-Microsoft recommends that you manage access to resources using Azure RBAC. However, if you are still using the classic deployment model and managing the classic resources by using [Azure Service Management PowerShell Module](/powershell/module/servicemanagement/azure.service), you'll need to use a classic administrator.
+Microsoft recommends that you manage access to resources using Azure RBAC. However, if you are still using the classic deployment model and managing the classic resources by using [Azure Service Management PowerShell Module](/powershell/azure/servicemanagement/install-azure-ps), you'll need to use a classic administrator.
> [!TIP] > If you only use the Azure portal to manage the classic resources, you don't need to use the classic administrator.
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
Title: Azure EA portal administration
description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 08/08/2022 Last updated : 04/10/2023
This article explains the common tasks that an administrator accomplishes in the
## Activate your enrollment
-To activate your service, the initial enterprise administrator opens the [Azure Enterprise portal](https://ea.azure.com) and signs in using the email address from the invitation email.
+To activate your enrollment, the initial enterprise administrator signs in to the [Azure Enterprise portal](https://ea.azure.com) using their work, school, or Microsoft account.
If you've been set up as the enterprise administrator, you don't need to receive the activation email. Go to [Azure Enterprise portal](https://ea.azure.com) and sign in with your work, school, or Microsoft account email address and password.
data-factory Tutorial Pipeline Failure Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md
To set up the pattern:
* Add generic error handling step to the end of the pipeline
* Connect both UponFailure and UponSkip paths from the last activity to the error handling activity

The last step, Generic Error Handling, will only run if any of the previous activities fails. It will not run if they all succeed.
+You can add multiple activities for error handling.
++++
## Next steps

[Data Factory metrics and alerts](monitor-metrics-alerts.md)
data-manager-for-agri How To Set Up Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md
+
+ Title: Enable logging for Azure Data Manager for Agriculture
+description: Learn how to enable logging and debugging in Azure Data Manager for Agriculture
++++ Last updated : 04/10/2023+++
+# Azure Data Manager for Agriculture logging
+
+After you create a Data Manager for Agriculture resource instance, you can monitor how and when your resources are accessed, and by whom. You can also debug the reasons for failed data-plane requests. To do so, enable logging for Azure Data Manager for Agriculture. You can then save the log information to a destination that you provide, such as a storage account, an event hub, or a Log Analytics workspace.
+
+This article provides you with the steps to set up logging for Azure Data Manager for Agriculture.
+
+## Enable collection of logs
+
+After you create a Data Manager for Agriculture resource, navigate to **Diagnostic settings** and then select **Add diagnostic setting**. Follow these steps to start collecting and storing logs:
+
+1. Provide a name for the diagnostic setting.
+2. Select the categories that you want to start collecting logs for.
+3. Choose the destination: a storage account, an event hub, or a Log Analytics workspace. (A CLI sketch for creating the setting follows this list.)
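As an alternative to the portal steps above, a diagnostic setting can also be created from the command line. The following is a minimal sketch using the Azure CLI; the setting name `adma-logs`, the resource IDs, and the chosen categories are placeholders that you would replace with your own values.

```bash
# Minimal sketch (assumed names/IDs): create a diagnostic setting that sends two
# log categories from a Data Manager for Agriculture resource to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name adma-logs \
  --resource "<data-manager-for-agriculture-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "FarmManagementLogs", "enabled": true},
           {"category": "ApplicationAuditLogs", "enabled": true}]'
```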
++
+Now you can navigate to the destination that you specified in the diagnostic setting to access logs. Logging information is available within 10 minutes (at most) of the Data Manager for Agriculture operation; in most cases it's quicker.
+
+## Interpret your logs
+Each log follows the schema listed in the table. The table contains the field names and descriptions:
+
+| Field name | Description |
+| | |
+| **time** |Date and time in UTC. |
+| **resourceId** |Azure Resource Manager resource ID. For logs, this is the Data Manager for Agriculture resource ID. |
+| **operationName** |Name of the operation, as documented. |
+| **operationVersion** |REST API version requested by the client. |
+| **category** |Type of result. |
+| **resultType** |Result of the REST API request (success or failure). |
+| **resultSignature** |HTTP status. |
+| **resultDescription** |Extra description about the result, when available. |
+| **durationMs** |Time it took to service the REST API request, in milliseconds.|
+| **callerIpAddress** |IP address of the client that made the request. |
+|**level**|The severity level of the event (Informational, Warning, Error, or Critical). |
+| **correlationId** |An optional GUID that can be used to correlate logs. |
+| **identity** |Identity from the token that was presented in the REST API request. This is usually an object ID and an application ID or either of the two.|
+|**location**|The region of the resource emitting the event such as "East US" |
+| **properties** |For each `operationName`, this contains: `requestUri` (URI of the API request), `partyId` (party ID associated with the request, wherever applicable), `dataPlaneResourceId` (ID that uniquely identifies the data-plane resource in the request), and `requestBody` (the request body for the API call associated with the `operationName`, for all categories other than ApplicationAuditLogs). </br> In addition to the common fields mentioned before, the `JobProcessedLogs` category has: </br> 1. Fields common across operationNames: `jobRunType` (can be oneTime or periodic), `jobId` (ID of the job), `initiatedBy` (indicates whether a job was triggered by a user or by the service). </br> 2. Fields for failed farmOperation related jobs: `farmOperationEntityId` (ID of the entity that failed to be created by the farmOperation job), `farmOperationEntityType` (type of the entity that failed to be created), `errorCode` (code for job failure), `errorMessage` (description of failure), `internalErrorCode` (failure code provided by the provider), `internalErrorMessage` (description of the failure provided by the provider), `providerId` (ID of the provider).
++
+The `categories` field for Data Manager for Agriculture can have values that are listed in the following table:
+### Categories table
+| category| Description |
+| | |
+|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
+|FarmOperationsLogs|Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData
+|SatelliteLogs| Logs for create and get operations for Satellite data ingestion job
+|WeatherLogs|Logs for create, delete and get operations for weather data ingestion job
+|ProviderAuthLogs| Logs for create, update, delete, cascade delete, get and get all for Oauth providers. It also has logs for get, get all, cascade delete for oauth tokens.
+|JobProcessedLogs| Logs indicating success or failure, and the reason for failure, for jobs. In addition to logs for resource cascade delete jobs and data-ingestion jobs, it also contains logs for farm operations and event handling jobs.
+|ModelInferenceLogs| Logs for create and get operations for biomass model job.
+|InsightLogs| Logs for get and get all operations for insights.
+|ApplicationAuditLogs| Logs for privileged actions such as data-plane resource create, update, delete and subscription management operations. Complete list is in the operation name table below.
+
+The `operationName` field values are in *Microsoft.AgFoodPlatform/resource-name/read or write or delete or action* format.
+* `/write` suffix in the operation name corresponds to a create or update of the resource-name
+* `/read` suffix in the operation name corresponds to GET / LIST / GET ALL API calls, or a GET of the status of a cascade delete job for the resource-name
+* `/delete` suffix corresponds to the deletion of the resource-name
+* `/action` suffix corresponds to POST method calls for a resource-name
+* `/processed` suffix corresponds to completion of a job (a PUT method call). It indicates the status of the job (success or failure).
+* `/failures` suffix corresponds to failure of a farm operation job (a PUT method call) and contains a description of the reason for failure.
+
+The nomenclature for jobs is as follows:
+* For data-ingestion jobs: *Microsoft.AgFoodPlatform/ingestionJobs/<'resource-name'>DataingestionJobs/write*
+* For deletion jobs: *Microsoft.AgFoodPlatform/deletionJobs/<'resource-name'>cascadeDeleteJobs/write*
+
+The following tables list the **operationName** values and corresponding REST API commands for each category:
+
+### FarmManagementLogs
+
+| operationName |
+| |
+|Microsoft.AgFoodPlatform/farmers/write|
+|Microsoft.AgFoodPlatform/farmers/read|
+|Microsoft.AgFoodPlatform/deletionJobs/farmersCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/farms/write|
+|Microsoft.AgFoodPlatform/farms/read|
+|Microsoft.AgFoodPlatform/farms/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/farmsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/field/write|
+|Microsoft.AgFoodPlatform/field/read|
+|Microsoft.AgFoodPlatform/field/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/fieldsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/seasonalField/write|
+|Microsoft.AgFoodPlatform/seasonalField/read|
+|Microsoft.AgFoodPlatform/seasonalField/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/seasonalFieldsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/boundaries/write|
+|Microsoft.AgFoodPlatform/boundaries/read|
+|Microsoft.AgFoodPlatform/boundaries/delete|
+|Microsoft.AgFoodPlatform/boundaries/action|
+|Microsoft.AgFoodPlatform/deletionJobs/fieldsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/crops/write|
+|Microsoft.AgFoodPlatform/crops/read|
+|Microsoft.AgFoodPlatform/crops/delete|
+|Microsoft.AgFoodPlatform/cropVarieties/write|
+|Microsoft.AgFoodPlatform/cropVarieties/read|
+|Microsoft.AgFoodPlatform/cropVarieties/delete|
+|Microsoft.AgFoodPlatform/seasons/write|
+|Microsoft.AgFoodPlatform/seasons/read|
+|Microsoft.AgFoodPlatform/seasons/delete|
+|Microsoft.AgFoodPlatform/attachments/write|
+|Microsoft.AgFoodPlatform/attachments/read|
+|Microsoft.AgFoodPlatform/attachments/delete|
+|Microsoft.AgFoodPlatform/prescriptions/write|
+|Microsoft.AgFoodPlatform/prescriptions/read|
+|Microsoft.AgFoodPlatform/prescriptions/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/prescriptionsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/prescriptionMaps/write|
+|Microsoft.AgFoodPlatform/prescriptionMaps/read|
+|Microsoft.AgFoodPlatform/prescriptionMaps/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/prescriptionMapsCascadeDeleteJobs/write|
+|Microsoft.AgFoodPlatform/managementZones/write|
+|Microsoft.AgFoodPlatform/managementZones/read|
+|Microsoft.AgFoodPlatform/managementZones/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/managementZonescascadeDeletejobs/write|
+|Microsoft.AgFoodPlatform/zones/write|
+|Microsoft.AgFoodPlatform/zones/read|
+|Microsoft.AgFoodPlatform/zones/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/zonesCascadedeleteJobs/write|
+|Microsoft.AgFoodPlatform/plantTissueanalyses/write|
+|Microsoft.AgFoodPlatform/plantTissueanalyses/read|
+|Microsoft.AgFoodPlatform/plantTissueanalyses/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/plantTissueanalysesCascadedeleteJobs/write|
+|Microsoft.AgFoodPlatform/nutrientAnalyses/write|
+|Microsoft.AgFoodPlatform/nutrientAnalyses/read|
+|Microsoft.AgFoodPlatform/nutrientAnalyses/delete|
+|Microsoft.AgFoodPlatform//deletionJobs/nutrientAnalysescascadeDeletejobs/delete|
++
+### FarmOperationsLogs
+
+| operationName |
+| |
+|Microsoft.AgFoodPlatform/ingestionJobs/farmOperationsdataIngestionjobs/write|
+|Microsoft.AgFoodPlatform/applicationData/read|
+|Microsoft.AgFoodPlatform/applicationData/write|
+|Microsoft.AgFoodPlatform/applicationData/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/applicationDatacascadeDeletejob/write|
+|Microsoft.AgFoodPlatform/plantingData/write|
+|Microsoft.AgFoodPlatform/plantingData/read|
+|Microsoft.AgFoodPlatform/plantingData/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/plantingDatacascadeDeletejob/write|
+|Microsoft.AgFoodPlatform/harvestingData/write|
+|Microsoft.AgFoodPlatform/harvestingData/read|
+|Microsoft.AgFoodPlatform/harvestingData/delete|
+|Microsoft.AgFoodPlatform/deletionJobs/harvestingDatacascadeDeletejob/write|
+|Microsoft.AgFoodPlatform/tillageData/Write|
+|Microsoft.AgFoodPlatform/tillageData/Read|
+|Microsoft.AgFoodPlatform/tillageData/Delete|
+|Microsoft.AgFoodPlatform/deletionJobs/tillageDatacascadeDeletejob/write|
+
+### SatelliteLogs
+
+| operationName |
+| |
+|Microsoft.AgFoodPlatform/ingestionJobs/satelliteDataingestionJob/write|
+|Microsoft.AgFoodPlatform/scenes/read|
++
+### WeatherLogs
+
+| operationName |
+| |
+|Microsoft.AgFoodPlatform/ingestionJobs/weatherDataingestionJob/write|
+|Microsoft.AgFoodPlatform/weather/read|
+|Microsoft.AgFoodPlatform/deletionJobs/weatherDeletejob/delete|
+
+### ProviderAuthLogs
+
+| operationName|
+| |
+|Microsoft.AgFoodPlatform/oauthProviders/write|
+|Microsoft.AgFoodPlatform/oauthProviders/read|
+|Microsoft.AgFoodPlatform/oauthProviders/delete|
+|Microsoft.AgFoodPlatform/oauthTokens/read|
+|Microsoft.AgFoodPlatform/oauthTokens/delete|
+
+### JobProcessedLogs
+ |operationName|
+ | |
+ |Microsoft.AgFoodPlatform/ingestionJobs/satelliteDataIngestionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/satelliteDataDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/weatherDataIngestionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/weatherDataDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/oauthProvidersCascadeDeleteJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/oauthTokensRemoveJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/biomassModelJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/ImageProcessingRasterizeJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationDataIngestionJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationDataIngestionJobs/processed/failures
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationPeriodicJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationPeriodicJobs/processed/failures
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationEventHandlingJobs/processed
+ |Microsoft.AgFoodPlatform/ingestionJobs/farmOperationEventHandlingJobs/processed/failures
+ |Microsoft.AgFoodPlatform/deletionJobs/applicationDataCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/tillageDataCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/plantingDataCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/harvestDataCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/managementZonesCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/zonesCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/plantTissueAnalysesCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/prescriptionsCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/prescriptionMapsCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/insightsCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/farmersCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/farmsCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/fieldsCascadeDeletionJobs/processed
+ |Microsoft.AgFoodPlatform/deletionJobs/seasonalFieldsCascadeDeletionJobs/processed
+
+### ApplicationAuditLogs
+The operation names corresponding to write operations in other categories are also present in this category. These common logs don't contain the request body, and they can be correlated with logs in other categories by using the `correlationId` field. Some control-plane operations that aren't part of the other categories are listed below.
+
+|operationName|
+| |
+|Create Data Manager for Agriculture Resource|
+|Update Data Manager for Agriculture Resource|
+|Delete Data Manager for Agriculture Resource|
+|Create Subscription|
+|Update Subscription|
+|Data Plane Authentication|
+
+## Query resource logs in a log analytics workspace
+Each `category` of resource logs is mapped to a table in Log Analytics. To access logs for each category, create a diagnostic setting that sends data to a Log Analytics workspace. In this workspace, you can query any of the tables listed below to obtain the relevant logs.
+
+### List of tables in log analytics and their mapping to categories in resource logs
+| Table name in log analytics| Categories in resource logs |Description
+| | | |
+|AgriFoodFarmManagementLogs|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
+|AgriFoodFarmOperationsLogs|FarmOperationsLogs| Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData.
+|AgriFoodSatelliteLogs|SatelliteLogs| Logs for create and get operations for satellite data ingestion job.
+|AgriFoodWeatherLogs|WeatherLogs|Logs for create, delete and get operations for weather data ingestion job.
+|AgriFoodProviderAuthLogs|ProviderAuthLogs| Logs for create, update, delete, cascade delete, get and get all for oauth providers. It also has logs for get, get all, cascade delete for oauth tokens.
+|AgriFoodInsightLogs|InsightLogs| Logs for get and get all operations for insights.
+|AgriFoodModelInferenceLogs|ModelInferenceLogs| Logs for create and get operations for biomass model job.
+|AgriFoodJobProcessedLogs|JobProcessedLogs| Logs indicating success or failure, and the reason for failure, for jobs. In addition to logs for resource cascade delete jobs and data-ingestion jobs, it also contains logs for farm operations and event handling jobs.
+|AgriFoodApplicationAuditLogs|ApplicationAuditLogs| Logs for privileged actions such as data-plane resource create, update, delete and subscription management operations.
++
+### List of columns in log analytics tables
+| Field name | Description |
+| | |
+|**Time** |Date and time in UTC. |
+|**ResourceId** |Azure Resource Manager resource ID for Data Manager for Agriculture logs.|
+|**OperationName** |Name of the operation, as documented in the earlier table. |
+|**OperationVersion** |REST API version requested by the client. |
+|**Category** |Category of the Data Manager for Agriculture logs; this can be any value listed in the categories table. |
+|**ResultType** |Result of the REST API request (success or failure). |
+|**ResultSignature** |HTTP status. |
+|**ResultDescription** |More description about the result, when available. |
+|**DurationMs** |Time it took to service the REST API request, in milliseconds.|
+|**CallerIpAddress** |IP address of the client that made the request. |
+|**Level**|The severity level of the event (informational, warning, error, or critical).|
+|**CorrelationId** |An optional GUID that can be used to correlate logs. |
+|**ApplicationId**| Application ID indicating identity of the caller.|
+|**ObjectId**| Object ID indicating identity of the caller.|
+|**ClientTenantId**| ID of the tenant of the caller.|
+|**SubscriptionId**| ID of the subscription used by the caller.
+|**Location**|The region of the resource emitting the event such as "East US" |
+|**JobRunType**| Available only in the `AgriFoodJobProcessedLogs` table. Indicates the type of the job run; the value can be either periodic or one time. |
+|**JobId**| Available in `AgriFoodJobProcessedLogs`, `AgriFoodSatelliteLogs`, `AgriFoodWeatherLogs`, and `AgriFoodModelInferenceLogs`. Indicates the ID of the job. |
+|**InitiatedBy**| Available only in the `AgriFoodJobProcessedLogs` table. Indicates whether a job was initiated by a user or by the service. |
+|**partyId**| ID of the party associated with the operation. |
+|**Properties** | Available only in the `AgriFoodJobProcessedLogs` table. It contains: `farmOperationEntityId` (ID of the entity that failed to be created by the farmOperation job), `farmOperationEntityType` (type of the entity that failed to be created, such as ApplicationData or PeriodicJob), `errorCode` (code for failure of the job at the Data Manager for Agriculture end), `errorMessage` (description of failure at the Data Manager for Agriculture end), `internalErrorCode` (failure code of the job provided by the provider), `internalErrorMessage` (description of the failure provided by the provider), `providerId` (ID of the provider, such as JOHN-DEERE). |
+
+Each of these tables can be queried in a Log Analytics workspace. For a reference on the query language (KQL), see the [KQL quick reference](https://learn.microsoft.com/azure/data-explorer/kql-quick-reference).
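As an illustration, the following sketch runs a simple aggregation against one of the tables above from the command line. The workspace GUID is a placeholder, and the `az monitor log-analytics query` command may require the `log-analytics` CLI extension; running the same KQL in the portal's **Logs** blade works equally well.

```bash
# Hedged example: count operations by name and result over the last day for the
# AgriFoodFarmManagementLogs table. Replace <workspace-guid> with your workspace ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AgriFoodFarmManagementLogs | summarize count() by OperationName, ResultType" \
  --timespan "P1D" \
  --output table
```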
+
+### List of sample queries in the log analytics workspace
+| Query name | Description |
+| | |
+|**Status of farm management operations for a party** |Fetches a count of successes and failures of operations within the `FarmManagementLogs` category for each party.
+|**Job execution statistics for a party**| Provides a count of successes and failures for all operations in the `JobProcessedLogs` category for each party.
+|**Failed Authorization**|Identifies a list of users who failed to access your resource and the reason for this failure.
+|**Status of all operations for a party**|Aggregates failures and successes across categories for a party.
+|**Usage trends for top 100 parties based on the operations performed**|Retrieves a list of top 100 parties based on the number of hits received across categories. This query can be edited to track trend of usage for a particular party.|
+
+All the queries listed above can be used as base queries to form custom queries in a Log Analytics workspace. This list of queries can also be accessed in the **Logs** tab of your Azure Data Manager for Agriculture resource in the Azure portal.
+
+## Next steps
+
+Learn how to [set up private links](./how-to-set-up-private-links.md).
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
In this tutorial you'll learn how to:
- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md). -- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md).
- You must have [created a Defender for IoT micro agent module twin](quickstart-create-micro-agent-module-twin.md). -- You must have [installed the Defender for IoT micro agent](quickstart-standalone-agent-binary-installation.md)
+- You must have [installed the Defender for IoT micro agent](quickstart-standalone-agent-binary-installation.md).
-- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md)
+- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md).
-- Learned how to [investigate security recommendations](quickstart-investigate-security-recommendations.md)
+- Learned how to [investigate security recommendations](quickstart-investigate-security-recommendations.md).
## Investigate security alerts
The Defender for IoT security alert list displays all of the aggregated security
## Investigate security alert details
-Opening each aggregated alert displays the detailed alert description, remediation steps, and device ID for each device that triggered an alert. The alert severity, and direct investigation is accessible using Log Analytics.
+Opening each aggregated alert displays the detailed alert description, remediation steps, and device ID for each device that triggered an alert. The alert severity and direct investigation is accessible using Log Analytics.
**To investigate security alert details**:
Opening each aggregated alert displays the detailed alert description, remediati
1. Select any security alert from the list to open it.
-1. Review the alert **description**, **severity**, **source of the detection**, **device details** of all devices that issued this alert in the aggregation period.
+1. Review the alert **description**, **severity**, **source of the detection**, and **device details** of all devices that issued this alert in the aggregation period.
:::image type="content" source="media/quickstart/drill-down-iot-alert-details.png" alt-text="Investigate and review the details of each device in an aggregated alert." lightbox="media/quickstart/drill-down-iot-alert-details-expanded.png":::
-1. After reviewing the alert specifics, use the **manual remediation step** instructions to help remediate, and resolve the issue that caused the alert.
+1. After reviewing the alert specifics, use the **manual remediation step** instructions to help remediate and resolve the issue that caused the alert.
:::image type="content" source="media/quickstart/iot-alert-manual-remediation-steps.png" alt-text="Follow the manual remediation steps to help resolve or remediate your device security alerts":::
-## Investigate alerts in Log Analytics workspace
+## Investigate alerts in your Log Analytics workspace
You can access your alerts and investigate them with the Log Analytics workspace.
defender-for-iot Tutorial Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-recommendations.md
In this tutorial you'll learn how to:
> [!div class="checklist"] > - Investigate new recommendations > - Investigate security recommendation details
-> - Investigate recommendations in Log Analytics workspace
+> - Investigate recommendations in a Log Analytics workspace
> [!NOTE] > The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
In this tutorial you'll learn how to:
- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md). -- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md).
- You must have [created a Defender for IoT micro agent module twin](quickstart-create-micro-agent-module-twin.md). -- You must have [installed the Defender for IoT micro agent](quickstart-standalone-agent-binary-installation.md)
+- You must have [installed the Defender for IoT micro agent](quickstart-standalone-agent-binary-installation.md).
-- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md)
+- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md).
## Investigate recommendations
The IoT Hub recommendations list displays all of the aggregated security recomme
## Investigate security recommendation details
-Open each aggregated recommendation to display the detailed recommendation description, remediation steps, device ID for each device that triggered a recommendation. It also displays recommendation severity and direct-investigation access using Log Analytics.
+Open each aggregated recommendation to display the detailed recommendation description, remediation steps, and device ID for each device that triggered a recommendation. It also displays recommendation severity and direct-investigation access using Log Analytics.
1. Sign in to the [Azure portal](https://portal.azure.com/).
Open each aggregated recommendation to display the detailed recommendation descr
:::image type="content" source="media/quickstart/explore-security-recommendation-detail-inline.png" alt-text="Investigate specific security recommendations for a device with Defender for IoT" lightbox="media/quickstart/explore-security-recommendation-detail-expanded.png":::
-## Investigate recommendations in Log Analytics workspace
+## Investigate recommendations in a Log Analytics workspace
-**To access your recommendations in Log Analytics workspace**:
+**To access your recommendations in a Log Analytics workspace**:
1. Sign in to the [Azure portal](https://portal.azure.com/).
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
Depending on your setup, the appropriate Microsoft package will need to be insta
1. Download the repository configuration that matches your device operating system.
- - For Ubuntu 18.04
+ - For Ubuntu 18.04:
```bash
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
```
- - For Ubuntu 20.04
+ - For Ubuntu 20.04:
```bash
curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > ./microsoft-prod.list
```
- - For Debian 9 (both AMD64 and ARM64)
+ - For Debian 9 (both AMD64 and ARM64):
```bash
curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
```
Depending on your setup, the appropriate Microsoft package will need to be insta
## Connect via a proxy
-This procedure describes how you can connect the Defender for IoT micro-agent to the IoT Hub via a proxy.
+This procedure describes how you can connect the Defender for IoT micro agent to the IoT Hub via a proxy.
**To configure connections via a proxy**:
-1. On your micro-agent machine, create a `/etc/defender_iot_micro_agent/conf.json` file with the following content:
+1. On your micro agent machine, create a `/etc/defender_iot_micro_agent/conf.json` file with the following content:
```json {
This procedure describes how you can connect the Defender for IoT micro-agent to
1. Delete any cached file at **/var/lib/defender_iot_micro_agent/cache.json**.
-1. Restart the micro-agent. Run:
+1. Restart the micro agent. Run:
```bash sudo systemctl restart defender-iot-micro-agent.service
This procedure describes additional steps required to support the AMQP protocol.
**To add AMQP protocol support**:
-1. On your micro-agent machine, open the `/etc/defender_iot_micro_agent/conf.json` file and add the following content:
+1. On your micro agent machine, open the `/etc/defender_iot_micro_agent/conf.json` file and add the following content:
```json {
This procedure describes additional steps required to support the AMQP protocol.
``` 1. Delete any cached file at **/var/lib/defender_iot_micro_agent/cache.json**.
-1. Restart the micro-agent. Run
+1. Restart the micro agent. Run:
```bash sudo systemctl restart defender-iot-micro-agent.service
This procedure describes additional steps required to support the AMQP protocol.
**To add AMQP over web socket protocol support**:
-1. On your micro-agent machine, open the `/etc/defender_iot_micro_agent/conf.json` file and add the following content:
+1. On your micro agent machine, open the `/etc/defender_iot_micro_agent/conf.json` file and add the following content:
```json {
This procedure describes additional steps required to support the AMQP protocol.
``` 1. Delete any cached file at **/var/lib/defender_iot_micro_agent/cache.json**.
-1. Restart the micro-agent. Run
+1. Restart the micro agent. Run:
```bash sudo systemctl restart defender-iot-micro-agent.service
Http Proxy configuration is supported for this protocol, in the case that proxy
There are two options that can be used to authenticate the Defender for IoT micro agent: -- [Module identity connection string](#authenticate-using-a-module-identity-connection-string).
+- [Authenticate using a module identity connection string](#authenticate-using-a-module-identity-connection-string).
- [Authenticate using a certificate](#authenticate-using-a-certificate).
You will need to copy the module identity connection string from the DefenderIoT
**To copy the module identity's connection string**:
-1. Navigate to the **IoT Hub** > **`Your hub`** > **Device management** > **Devices** .
+1. Navigate to the **IoT Hub** > **`Your hub`** > **Device management** > **Devices**.
:::image type="content" source="media/quickstart-standalone-agent-binary-installation/iot-devices.png" alt-text="Select IoT devices from the left-hand menu.":::
You will need to copy the module identity connection string from the DefenderIoT
The `connection_string.txt` will now be located in the following path location `/etc/defender_iot_micro_agent/connection_string.txt`.
- **Please note that the connection string includes a key that enables direct access to the module itself, therefore includes sensitive information that should only be used and readable by root users.**
+ > [!NOTE]
+ > The connection string includes a key that enables direct access to the module itself, therefore includes sensitive information that should only be used and readable by root users.
1. Restart the service using this command:
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
Last updated 02/28/2023
-# Manage your environment
+# Manage your deployment environment
In Azure Deployment Environments Preview, a development infrastructure admin gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the pre-configured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
description: This quickstart shows how to create an internal basic load balancer
Previously updated : 03/24/2022 Last updated : 04/10/2023 -+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs. # Quickstart: Create an internal basic load balancer to load balance VMs by using the Azure CLI
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
Previously updated : 03/21/2022 Last updated : 04/10/2023 #Customer intent: I want to create a internal load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
description: This quickstart shows how to create an internal basic load balancer
Previously updated : 03/24/2022 Last updated : 04/10/2023 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
Previously updated : 03/16/2022 Last updated : 04/10/2023
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
Previously updated : 03/15/2022 Last updated : 04/10/2023
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
description: This quickstart shows how to create a basic internal load balancer using Azure PowerShell Previously updated : 02/03/2023 Last updated : 04/10/2023
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
Previously updated : 03/31/2022 Last updated : 04/10/2023
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
description: This article shows how deploy an IPv6 dual stack application in Azu
Previously updated : 03/31/2022 Last updated : 04/10/2023
logic-apps Create Automation Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-automation-tasks-azure-resources.md
> This capability is in preview and is subject to the > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To help you manage [Azure resources](../azure-resource-manager/management/overview.md#terminology) more easily, you can create automated management tasks for a specific resource or resource group. These tasks vary in number and availability, based on the resource type. For example, for an [Azure storage account](../storage/common/storage-account-overview.md), you can set up an automation task that sends the monthly cost for that storage account. For an [Azure virtual machine](https://azure.microsoft.com/services/virtual-machines/), you can create an automation task that turns on or turns off that virtual machine on a predefined schedule.
+To help you manage [Azure resources](../azure-resource-manager/management/overview.md#terminology) more easily, you can create automated management tasks for a specific resource or resource group. These tasks vary in number and availability, based on the resource type. For example:
+
+- For an [Azure storage account](../storage/common/storage-account-overview.md), you can set up an automation task that sends the monthly cost for that storage account.
+
+- For an [Azure virtual machine](../virtual-machines/overview.md), you can create an automation task that turns on or turns off that virtual machine on a predefined schedule. Specifically, you can create a task that automatically starts or stops the virtual machine a specific number of times every day, week, or month. On the task's **Configure** tab, set the **Interval** value to the number of times and the **Frequency** value to **Day**, **Week**, or **Month**. The automation task continues to work until you delete or disable the task.
+
+ For example, you can create a task that automatically starts a virtual machine once every day. On the task's **Configure** tab, set **Interval** to **1** and **Frequency** to **Day**.
You can create an automation task from a specific automation task template. The following table lists the currently supported resource types and available task templates in this preview:
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
You can use Azure Machine Learning CLI command `k8s-extension create` to deploy
| `allowInsecureConnections` |`True` or `False`, default `False`. **Can** be set to `True` to use inference HTTP endpoints for development or test purposes. |N/A| Optional | Optional | | `inferenceRouterServiceType` |`loadBalancer`, `nodePort` or `clusterIP`. **Required** if `enableInference=True`. | N/A| **&check;** | **&check;** | | `internalLoadBalancerProvider` | This config is only applicable for Azure Kubernetes Service(AKS) cluster now. Set to `azure` to allow the inference router using internal load balancer. | N/A| Optional | Optional |
- |`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md#configure-sslsecret). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
+ |`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
|`sslCname` |An TLS/SSL CNAME is used by inference HTTPS endpoint. **Required** if `allowInsecureConnections=False` | N/A | Optional | Optional| | `inferenceRouterHA` |`True` or `False`, default `True`. By default, Azure Machine Learning extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional | |`nodeSelector` | By default, the deployed kubernetes resources and your machine learning workloads are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment and your training/inference workloads to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional |
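The `sslSecret` setting in the table above refers to a Kubernetes secret in the `azureml` namespace that holds the PEM-encoded certificate and key. A minimal sketch of creating such a secret with `kubectl` follows, assuming local files named `cert.pem` and `key.pem` and a secret name of your choosing:

```bash
# Create the azureml namespace if it doesn't exist yet, then store the TLS cert/key
# in a generic secret whose name is passed to the extension as sslSecret.
kubectl create namespace azureml --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic my-ssl-secret \
  --from-file=cert.pem=./cert.pem \
  --from-file=key.pem=./key.pem \
  --namespace azureml
```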
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
In general, files in MLflow are called artifacts. You can log artifacts in multi
|Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated is not included. | > [!TIP]
-> When __loggiging large files__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_VALUE`.
+> When __logging large files__ with `log_artifact` or `log_model`, you may encounter timeout errors before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. Its default value is `300` (seconds).
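For example, a sketch of raising the timeout before launching the run (the value of `1200` seconds is only illustrative):

```bash
# Allow up to 20 minutes for artifact uploads instead of the default 300 seconds.
export AZUREML_ARTIFACTS_DEFAULT_TIMEOUT=1200
```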
## Logging models
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-Batch Endpoints can be used for processing tabular data, but also any other file type like text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+Batch Endpoints can be used for processing tabular data that contains text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
## About this sample
The model we are going to work with was built using the popular library transfor
* It is trained for summarization of text in English. * We are going to use Torch as a backend.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the [`cli/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/cli/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using the Azure CLI or [`sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using our SDK for Python.
+
+# [Azure CLI](#tab/cli)
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1 cd azureml-examples/cli/endpoints/batch/deploy-models/huggingface-text-summarization ```
+# [Python](#tab/python)
+
+In a Jupyter notebook:
+
+```python
+!git clone https://github.com/Azure/azureml-examples --depth 1
+%cd azureml-examples/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization
+```
+++
### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization/text-summarization-batch.ipynb).
You can follow along this sample in a Jupyter Notebook. In the cloned repository
First, let's connect to Azure Machine Learning workspace where we're going to work on.
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
```azurecli az account set --subscription <subscription>
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group,
### Registering the model
-Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `model`. We will use it during the course of this tutorial.
+Due to the size of the model, it hasn't been included in this repository. Instead, you can download a copy from the HuggingFace model's hub. You need the packages `transformers` and `torch` installed in the environment you are using.
+
+```python
+%pip install transformers torch
+```
+
+Use the following code to download the model to a folder `model`:
```python from transformers import pipeline
MODEL_NAME='bart-text-summarization'
az ml model create --name $MODEL_NAME --path "model" ```
-# [Python](#tab/sdk)
+# [Python](#tab/python)
```python model_name = 'bart-text-summarization'
We are going to create a batch endpoint named `text-summarization-batch` where t
1. Decide on the name of the endpoint. The name of the endpoint will end-up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/cli)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
We are going to create a batch endpoint named `text-summarization-batch` where t
1. Configure your batch endpoint
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/cli)
The following YAML file defines a batch endpoint:
We are going to create a batch endpoint named `text-summarization-batch` where t
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_batch_endpoint" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python ml_client.batch_endpoints.begin_create_or_update(endpoint)
Let's create the deployment that will host the model:
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deployment.yml" range="7-10" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
Let's get a reference to the environment:
Let's create the deployment that will host the model:
1. Each deployment runs on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) or [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). In this example, our model can benefit from GPU acceleration, which is why we will use a GPU cluster.
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_compute" :::
Let's create the deployment that will host the model:
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
To create a new deployment with the indicated environment and scoring script use the following code:
Let's create the deployment that will host the model:
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME ```
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python endpoint.defaults.deployment_name = deployment.name
For testing our endpoint, we are going to use a sample of the dataset [BillSum:
> [!NOTE]
> The utility `jq` may not be installed on every system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python input = Input(type=AssetTypes.URI_FOLDER, path="data")
For testing our endpoint, we are going to use a sample of the dataset [BillSum:
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/deploy-and-run.sh" ID="show_job_in_studio" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python ml_client.jobs.get(job.name)
For testing our endpoint, we are going to use a sample of the dataset [BillSum:
az ml job download --name $JOB_NAME --output-name score --download-path . ```
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
# Troubleshoot Kubernetes Compute
-In this article, you'll learn how to troubleshoot common problems you may encounter with using [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md) for training jobs and model deployments.
+In this article, you'll learn how to troubleshoot common errors for workloads (including training jobs and endpoints) on [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md).
## Inference guide
-### How to check sslCertPemFile and sslKeyPemFile is correct?
-Use the commands below to run a baseline check for your cert and key. This is to allow for any known errors to be surfaced. Expect the second command to return "RSA key ok" without prompting you for password.
-
-```bash
-openssl x509 -in cert.pem -noout -text
-openssl rsa -in key.pem -noout -check
-```
-
-Run the commands below to verify whether sslCertPemFile and sslKeyPemFile are matched:
-
-```bash
-openssl x509 -in cert.pem -noout -modulus | md5sum
-openssl rsa -in key.pem -noout -modulus | md5sum
-```
+The common Kubernetes endpoint errors on Kubernetes compute fall into two scopes: **compute scope** and **cluster scope**. Compute scope errors are related to the compute target, such as the compute target not being found or not being accessible. Cluster scope errors are related to the underlying Kubernetes cluster, such as the cluster itself being unreachable or not found.
### Kubernetes compute errors
You can check the following items to troubleshoot the issue:
> [!TIP] > More troubleshoot guide of common errors when creating/updating the Kubernetes online endpoints and deployments, you can find in [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
+### How to check sslCertPemFile and sslKeyPemFile is correct?
+Use the commands below to run a baseline check for your cert and key. This allows any known errors to be surfaced. Expect the second command to return "RSA key ok" without prompting you for a password.
+
+```bash
+openssl x509 -in cert.pem -noout -text
+openssl rsa -in key.pem -noout -check
+```
+
+Run the commands below to verify whether sslCertPemFile and sslKeyPemFile are matched:
+
+```bash
+openssl x509 -in cert.pem -noout -modulus | md5sum
+openssl rsa -in key.pem -noout -modulus | md5sum
+```
+ ## Training guide
-### Job retry
+While a training job is running, you can check its status in the workspace portal. If you encounter an abnormal job status, such as the job being retried multiple times, getting stuck in an initializing state, or eventually failing, follow the guidance below to troubleshoot the issue.
+
+### Job retry debugging
If the training job pod running in the cluster was terminated because the node ran out of memory (OOM), the job is **automatically retried** on another available node.
The host name of the node which the job pod is running on will be indicated in t
"ask-agentpool-17631869-vmss0000" represents the **node host name** running this job in your AKS cluster. Then you can access the cluster to check about the node status for further investigation.
-### UserError
-#### Azure Machine Learning Kubernetes job failed. E45004
+### Job pod get stuck in Init state
-If the error message is:
+If the job runs longer than you expected and if you find that your job pods are getting stuck in an Init state with this warning `Unable to attach or mount volumes: *** failed to get plugin from volumeSpec for volume ***-blobfuse-*** err=no volume plugin matched`, the issue might be occurring because Azure Machine Learning extension doesn't support download mode for input data.
-```bash
-Azure Machine Learning Kubernetes job failed. E45004:"Training feature is not enabled, please enable it when install the extension."
-```
+To resolve this issue, change to mount mode for your input data.
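As a hedged sketch, assuming a CLI (v2) job specification with an input named `training_data`, the input mode can be switched to read-only mount at submission time without editing the YAML file:

```bash
# Override the input mode from download to read-only mount when submitting the job.
# "training_data" is a hypothetical input name; use the name defined in your job.yml.
az ml job create --file job.yml --set inputs.training_data.mode=ro_mount
```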
-Please check whether you have `enableTraining=True` set when doing the Azure Machine Learning extension installation. More details could be found at [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](how-to-deploy-kubernetes-extension.md)
-#### Unable to mount data store workspaceblobstore. Give either an account key or SAS token
+### Common job failure errors
-If you need to access Azure Container Registry (ACR) for Docker image, and Storage Account for training data, this issue should occur when the compute is not specified with a managed identity. This is because machine learning workspace default storage account without any credentials is not supported for training jobs.
+Below is a list of common error types that you might encounter when using Kubernetes compute to create and run a training job. You can troubleshoot them by following the guidance in each section:
-To mitigate this issue, you can assign Managed Identity to the compute in compute attach step, or you can assign Managed Identity to the compute after it has been attached. More details could be found at [Assign Managed Identity to the compute target](how-to-attach-kubernetes-to-workspace.md#assign-managed-identity-to-the-compute-target).
+* [Job failed. 137](#job-failed-137)
+* [Job failed. E45004](#job-failed-e45004)
+* [Job failed. 400](#job-failed-400)
+* [Give either an account key or SAS token](#give-either-an-account-key-or-sas-token)
+* [AzureBlob authorization failed](#azureblob-authorization-failed)
-#### Unable to upload project files to working directory in AzureBlob because the authorization failed
+#### Job failed. 137
If the error message is: ```bash
-Unable to upload project files to working directory in AzureBlob because the authorization failed.
+Azure Machine Learning Kubernetes job failed. 137:PodPattern matched: {"containers":[{"name":"training-identity-sidecar","message":"Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n * Serving Flask app 'msi-endpoint-server' (lazy loading)\n * Environment: production\n WARNING: This is a development server. Do not use it in a production deployment.\n Use a production WSGI server instead.\n * Debug mode: off\n * Running on http://127.0.0.1:12342/ (Press CTRL+C to quit)\n","code":137}]}
```
-You can check the following items to troubleshoot the issue:
-* Make sure the storage account has enabled the exceptions of ΓÇ£Allow Azure services on the trusted service list to access this storage accountΓÇ¥ and the workspace is in the resource instances list.
-* Make sure the workspace has a system assigned managed identity.
+Check your proxy settings and verify that 127.0.0.1 was added to proxy-skip-range when you ran `az connectedk8s connect`, following the [network configuration guidance](how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute).
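For reference, here's a minimal sketch of connecting the cluster with the loopback address excluded from the proxy; the cluster, resource group, and proxy address below are illustrative placeholders, not values from this article:

```azurecli
# Illustrative values only; replace the cluster, resource group, and proxy address with your own.
az connectedk8s connect \
  --name my-arc-cluster \
  --resource-group my-resource-group \
  --proxy-http http://proxy.example.com:3128 \
  --proxy-https http://proxy.example.com:3128 \
  --proxy-skip-range 10.0.0.0/8,kubernetes.default.svc,127.0.0.1,localhost
```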
-### Encountered an error when attempting to connect to the Azure Machine Learning token service
+#### Job failed. E45004
+
+If the error message is:
+
+```bash
+Azure Machine Learning Kubernetes job failed. E45004:"Training feature is not enabled, please enable it when install the extension."
+```
+
+Check whether you have `enableTraining=True` set when installing the Azure Machine Learning extension. For more information, see [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](how-to-deploy-kubernetes-extension.md).
+
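As a minimal sketch, the extension can be installed with training enabled like this; the extension and cluster names are illustrative, and the full set of configuration settings is described in the linked article:

```azurecli
# Illustrative values only; adjust the names, cluster type, and configuration settings for your environment.
az k8s-extension create \
  --name azureml-extension \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type connectedClusters \
  --cluster-name my-arc-cluster \
  --resource-group my-resource-group \
  --scope cluster \
  --config enableTraining=True enableInference=False
```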
+#### Job failed. 400
If the error message is:
Azure Machine Learning Kubernetes job failed. 400:{"Msg":"Encountered an error w
``` You can follow [Private Link troubleshooting section](#private-link-issue) to check your network settings.
-### ServiceError
+#### Give either an account key or SAS token
-#### Job pod get stuck in Init state
+If you need to access Azure Container Registry (ACR) for Docker images and a storage account for training data, this issue occurs when the compute isn't configured with a managed identity.
-If the job runs longer than you expected and if you find that your job pods are getting stuck in an Init state with this warning `Unable to attach or mount volumes: *** failed to get plugin from volumeSpec for volume ***-blobfuse-*** err=no volume plugin matched`, the issue might be occurring because Azure Machine Learning extension doesn't support download mode for input data.
+To access Azure Container Registry (ACR) from a Kubernetes compute cluster for Docker images, or access a storage account for training data, you need to attach the Kubernetes compute with a system-assigned or user-assigned managed identity enabled.
-To resolve this issue, change to mount mode for your input data.
+In this training scenario, the **compute identity** is the credential that the Kubernetes compute uses to communicate between the ARM resource bound to the workspace and the Kubernetes compute cluster. Without this identity, the training job fails and reports a missing account key or SAS token. For example, if you don't specify a managed identity for your Kubernetes compute, a job that accesses the storage account fails with the following error message:
-#### Azure Machine Learning Kubernetes job failed
+```bash
+Unable to mount data store workspaceblobstore. Give either an account key or SAS token
+```
-If the error message is:
+This failure happens because the machine learning workspace's default storage account, without any credentials, isn't accessible to training jobs on Kubernetes compute.
+
+To mitigate this issue, assign a managed identity to the compute in the compute attach step, or assign a managed identity to the compute after it has been attached. For more information, see [Assign Managed Identity to the compute target](how-to-attach-kubernetes-to-workspace.md#assign-managed-identity-to-the-compute-target).
+
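As a minimal sketch, assuming an Arc-connected cluster and CLI v2, a Kubernetes compute can be attached with a system-assigned managed identity as follows; all names are illustrative:

```azurecli
# Illustrative values only; replace the workspace, compute name, and cluster resource ID with your own.
az ml compute attach \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --type Kubernetes \
  --name k8s-compute \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Kubernetes/connectedClusters/<cluster>" \
  --identity-type SystemAssigned
```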
+#### AzureBlob authorization failed
+
+If your training jobs on Kubernetes compute need to access Azure Blob storage for data upload or download, the job can fail with the following error message:
```bash
-Azure Machine Learning Kubernetes job failed. 137:PodPattern matched: {"containers":[{"name":"training-identity-sidecar","message":"Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n * Serving Flask app 'msi-endpoint-server' (lazy loading)\n * Environment: production\n WARNING: This is a development server. Do not use it in a production deployment.\n Use a production WSGI server instead.\n * Debug mode: off\n * Running on http://127.0.0.1:12342/ (Press CTRL+C to quit)\n","code":137}]}
+Unable to upload project files to working directory in AzureBlob because the authorization failed.
```
-Check your proxy setting and check whether 127.0.0.1 was added to proxy-skip-range when using `az connectedk8s connect` by following this [network configuring](how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute).
+This error occurs because authorization fails when the job tries to upload the project files to Azure Blob storage. Check the following items to troubleshoot the issue (a CLI sketch follows this list):
+* Make sure the storage account has the exception "Allow Azure services on the trusted services list to access this storage account" enabled and that the workspace is in the resource instances list.
+* Make sure the workspace has a system-assigned managed identity.
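As a sketch of those checks with the CLI (all resource names are illustrative):

```azurecli
# Illustrative values only.
# Allow trusted Azure services to bypass the storage account firewall.
az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --bypass AzureServices

# Add the workspace as a resource instance that can access the storage account.
az storage account network-rule add \
  --resource-group my-resource-group \
  --account-name mystorageaccount \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>" \
  --tenant-id "<tenant-id>"

# Confirm that the workspace has a system-assigned managed identity.
az ml workspace show --resource-group my-resource-group --name my-workspace --query identity
```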
## Private link issue
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Although we do our best to provide a stable and reliable service, sometimes thin
## Common errors specific to Kubernetes deployments
+Errors related to identity and authentication:
* [ACRSecretError](#error-acrsecreterror)
+* [TokenRefreshFailed](#error-tokenrefreshfailed)
+* [GetAADTokenFailed](#error-getaadtokenfailed)
+* [ACRAuthenticationChallengeFailed](#error-acrauthenticationchallengefailed)
+* [ACRTokenExchangeFailed](#error-acrtokenexchangefailed)
+
+Errors related to CrashLoopBackOff:
* [ImagePullLoopBackOff](#error-imagepullloopbackoff)
* [DeploymentCrashLoopBackOff](#error-deploymentcrashloopbackoff)
* [KubernetesCrashLoopBackOff](#error-kubernetescrashloopbackoff)
-* [NamespaceNotFound](#error-namespacenotfound)
+
+Errors related to the scoring script:
* [UserScriptInitFailed](#error-userscriptinitfailed)
* [UserScriptImportError](#error-userscriptimporterror)
* [UserScriptFunctionNotFound](#error-userscriptfunctionnotfound)
+
+Others:
+* [NamespaceNotFound](#error-namespacenotfound)
* [EndpointAlreadyExists](#error-endpointalreadyexists)
* [ScoringFeUnhealthy](#error-scoringfeunhealthy)
* [ValidateScoringFailed](#error-validatescoringfailed)
This is a list of reasons you might run into this error when creating/updating t
* The Kubernetes cluster has an improper network configuration. Check the proxy, network policy, or certificate.
* If you're using a private AKS cluster, you must set up private endpoints for ACR, the storage account, and the workspace in the AKS virtual network (a CLI sketch follows this list).
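As an illustration of the private endpoint setup (hypothetical names), the workspace's container registry can be given a private endpoint inside the AKS virtual network like this; repeat the pattern for the storage account and the workspace with their respective group IDs:

```azurecli
# Hypothetical names; create similar private endpoints for the storage account and the workspace.
ACR_ID=$(az acr show --name myregistry --resource-group my-resource-group --query id -o tsv)

az network private-endpoint create \
  --name acr-private-endpoint \
  --resource-group my-resource-group \
  --vnet-name aks-vnet \
  --subnet private-endpoint-subnet \
  --private-connection-resource-id "$ACR_ID" \
  --group-id registry \
  --connection-name acr-private-endpoint-connection
```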
+### ERROR: TokenRefreshFailed
+
+This error occurs because the extension can't get the principal credential from Azure when the Kubernetes cluster identity isn't set properly. Reinstall the [Azure Machine Learning extension](../machine-learning/how-to-deploy-kubernetes-extension.md) and try again.
++
+### ERROR: GetAADTokenFailed
+
+This error occurs because the Kubernetes cluster's request for an Azure AD token failed or timed out. Check your network accessibility and then try again.
+
+* Follow [Configure required network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute) to check the outbound proxy and make sure the cluster can connect to the workspace.
+* The workspace endpoint URL can be found in the online endpoint CRD in the cluster.
+
+If your workspace is a private workspace with public network access disabled, the Kubernetes cluster can only communicate with that private workspace through the private link.
+
+* Check whether the workspace allows public access; whether the AKS cluster itself is public or private, it can't reach a workspace whose public network access is disabled (a quick CLI check follows this list).
+* For more information, see [Secure Azure Kubernetes Service inferencing environment](../machine-learning/how-to-secure-kubernetes-inferencing-environment.md#what-is-a-secure-aks-inferencing-environment).
+
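As a quick check (workspace and resource group names are illustrative), you can inspect the workspace's public network access setting from the CLI:

```azurecli
# Illustrative names; "Disabled" means the cluster must reach the workspace over its private endpoint.
az ml workspace show \
  --resource-group my-resource-group \
  --name my-workspace \
  --query public_network_access
```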
+### ERROR: ACRAuthenticationChallengeFailed
+
+This error occurs because the Kubernetes cluster can't reach the workspace's ACR service to perform the authentication challenge. Check your network, especially ACR public network access, and then try again.
+
+You can follow the troubleshooting steps in [GetAADTokenFailed](#error-getaadtokenfailed) to check the network.
+
+### ERROR: ACRTokenExchangeFailed
+
+This error occurs because the Kubernetes cluster failed to exchange the ACR token: the Azure AD token isn't authorized yet because the role assignment takes some time to propagate. Wait a moment and then try again.
+
+This failure might also be caused by too many requests to the ACR service at that time. It should be a transient error, so you can try again later.
+ ### ERROR: ImagePullLoopBackOff The reason you might run into this error when creating/updating Kubernetes online deployments is because you can't download the images from the container registry, resulting in the images pull failure.
machine-learning Samples Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-designer.md
Last updated 10/21/2021 + # Example pipelines & datasets for Azure Machine Learning designer Use the built-in examples in Azure Machine Learning designer to quickly get started building your own machine learning pipelines. The Azure Machine Learning designer [GitHub repository](https://github.com/Azure/MachineLearningDesigner) contains detailed documentation to help you understand some common machine learning scenarios.
The sample datasets are available under **Datasets**-**Samples** category. You c
|-|:--| | Adult Census Income Binary Classification dataset | A subset of the 1994 Census database, using working adults over the age of 16 with an adjusted income index of > 100.<br/>**Usage**: Classify people using demographics to predict whether a person earns over 50K a year.<br/> **Related Research**: Kohavi, R., Becker, B., (1996). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science| |Automobile price data (Raw)|Information about automobiles by make and model, including the price, features such as the number of cylinders and MPG, as well as an insurance risk score.<br/> The risk score is initially associated with auto price. It is then adjusted for actual risk in a process known to actuaries as symboling. A value of +3 indicates that the auto is risky, and a value of -3 that it is probably safe.<br/>**Usage**: Predict the risk score by features, using regression or multivariate classification.<br/>**Related Research**: Schlimmer, J.C. (1987). [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science. |
-| CRM Appetency Labels Shared |Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_appetency.labels](http://www.sigkdd.org/site/2009/files/orange_small_train_appetency.labels)).|
-|CRM Churn Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_churn.labels](http://www.sigkdd.org/site/2009/files/orange_small_train_churn.labels)).|
-|CRM Dataset Shared | This data comes from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train.data.zip](http://www.sigkdd.org/site/2009/files/orange_small_train.data.zip)). <br/>The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse. |
-|CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](http://www.sigkdd.org/site/2009/files/orange_large_train_upselling.labels)|
+| CRM Appetency Labels Shared |Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_appetency.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_appetency.labels)).|
+|CRM Churn Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train_churn.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_churn.labels)).|
+|CRM Dataset Shared | This data comes from the KDD Cup 2009 customer relationship prediction challenge ([orange_small_train.data.zip](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train.data.zip)). <br/>The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse. |
+|CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_upselling.labels)).|
|Flight Delays Data|Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation ([On-Time](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)).<br/>The dataset covers the time period April-October 2013. Before uploading to the designer, the dataset was processed as follows: <br/>- The dataset was filtered to cover only the 70 busiest airports in the continental US <br/>- Canceled flights were labeled as delayed by more than 15 minutes <br/>- Diverted flights were filtered out <br/>- The following columns were selected: Year, Month, DayofMonth, DayOfWeek, Carrier, OriginAirportID, DestAirportID, CRSDepTime, DepDelay, DepDel15, CRSArrTime, ArrDelay, ArrDel15, Canceled| |German Credit Card UCI dataset|The UCI Statlog (German Credit Card) dataset ([Statlog+German+Credit+Data](https://archive.ics.uci.edu/ml/datasets/Statlog+(German+Credit+Data))), using the german.data file.<br/>The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.| |IMDB Movie Titles|The dataset contains information about movies that were rated in Twitter tweets: IMDB movie ID, movie name, genre, and production year. There are 17K movies in the dataset. The dataset was introduced in the paper "S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013."|
The sample datasets are available under **Datasets**-**Samples** category. You c
## Next steps Learn the fundamentals of predictive analytics and machine learning with [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md)+
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
Title: 'Quickstart: Create an Azure DB for MariaDB - ARM template'
+ Title: 'Quickstart: Create an Azure Database for MariaDB - ARM template'
description: In this Quickstart article, learn how to create an Azure Database for MariaDB server by using an Azure Resource Manager template.
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
Azure Database for MySQL supports connecting your Azure Database for MySQL serve
## Step 1: Obtain SSL certificate
-Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example).
+Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example).
**For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
-See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+See the following links for certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
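If you prefer the command line to a browser, one way to fetch the certificate is shown below; save it to the folder you use for SSL files (for example, c:\ssl):

```bash
# Download the Baltimore CyberTrust Root certificate used by Azure Database for MySQL single server.
curl -o BaltimoreCyberTrustRoot.crt.pem https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem
```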
## Step 2: Bind SSL
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
To configure the identity in the following steps, use the [az identity show](/cl
```azurecli # Get resource ID of the user-assigned identity
-resourceID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query id --output tsv)
+RESOURCE_ID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query id --output tsv)
# Get client ID of the user-assigned identity
-clientID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query clientId --output tsv)
+CLIENT_ID=$(az identity show --resource-group myResourceGroup --name myManagedIdentity --query clientId --output tsv)
``` We can now assign the user-assigned identity to the VM with the [az vm identity assign](/cli/azure/vm/identity#az-vm-identity-assign) command: ```azurecli
-az vm identity assign --resource-group myResourceGroup --name myVM --identities $resourceID
+az vm identity assign --resource-group myResourceGroup --name myVM --identities $RESOURCE_ID
``` To finish setup, show the value of the Client ID, which you'll need in the next few steps: ```bash
-echo $clientID
+echo $CLIENT_ID
``` ## Creating a MySQL user for your Managed Identity
For testing purposes, you can run the following commands in your shell. Note you
# Retrieve the access token
-accessToken=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token)
+ACCESS_TOKEN=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=CLIENT_ID' -H Metadata:true | jq -r .access_token)
# Connect to the database
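# Illustrative sketch only (hypothetical server and user names): pass the token as the
# password and enable the MySQL client's cleartext plugin so the token is sent as-is.
mysql -h mydemoserver.mysql.database.azure.com \
  --user myManagedIdentityUser@mydemoserver \
  --enable-cleartext-plugin \
  --password=$ACCESS_TOKEN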
networking Working Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/working-remotely-support.md
Title: Enable remote work by using Azure networking services description: Learn how to use Azure networking services to enable remote work and how to mitigate traffic issues that result from an increased number of people who work remotely.---+ Previously updated : 03/26/2020- Last updated : 04/09/2023+ # Enable remote work by using Azure networking services
-This article describes the options that are available to organizations to set up remote access for their users or to supplement their existing solutions with additional capacity during periods of peak utilization.
+This article presents the different options available for organizations to establish remote access for their users. It also covers ways to supplement their existing solutions with extra capacity during periods of peak utilization.
Network architects are faced with the following challenges: - Address an increase in network utilization.+ - Provide reliable and secure connectivity to more employees of their company and customers.+ - Provide connectivity to remote locations across the globe. Not all networks (for example, private WAN and corporate core networks) experience congestion from peak loads of remote workers. The bottlenecks are commonly reported only in home broadband networks and in VPN gateways of on-premises networks of corporations. Network planners can help ease bottlenecks and alleviate network congestion by keeping in mind that different traffic types need different network treatment priorities. Some traffic requires smart load redirection or redistribution.
-For example, real-time telemedicine traffic of doctor/patient interaction has a high importance and is sensitive to delay or jitter. Replication of traffic between storage solutions is not delay sensitive. Telemedicine traffic must be routed via the most optimal network path with a high quality of service, whereas it's acceptable to use a suboptimal route for traffic between storage solutions.
+For example, real-time telemedicine traffic of doctor/patient interaction has a high importance and is sensitive to delay or jitter. Replication of traffic between storage solutions isn't delay sensitive. Telemedicine traffic must be routed via the most optimal network path with a high quality of service, whereas it's acceptable to use a suboptimal route for traffic between storage solutions.
## Elasticity and high availability in the Microsoft network
Azure is designed to withstand sudden changes in resource utilization and to kee
Microsoft maintains and operates one of the world's largest networks. The Microsoft network has been designed for high availability to withstand various types of failures, from failure of a single network element to failure of an entire region.
-The Microsoft network is also designed to handle various types of network traffic. This traffic can include delay-sensitive multimedia traffic for Skype and Teams, content delivery networks, real-time big data analysis, Azure Storage, Bing, and Xbox. To provide optimal performance, the Microsoft network attracts all the traffic that's destined to (or wanting to transit through) its resources as close as possible to the origin of the traffic.
+The Microsoft network is also designed to handle various types of network traffic. This traffic can include delay-sensitive multimedia traffic for Skype and Teams, content delivery networks, real-time big data analysis, Azure Storage, Bing, and Xbox. For optimal performance, Microsoft's network directs traffic intended for its resources or passing through them to be routed as close as possible to the traffic's point of origin.
>[!NOTE] >Using the Azure networking features described in this article takes advantage of the traffic attraction behavior of the Microsoft global network to provide a better networking experience for customers. The traffic attraction behavior of the Microsoft network helps offload traffic as soon as possible from the first-mile and last-mile networks that might experience congestion during periods of peak utilization.
To access your resources deployed in Azure, remote developers can use Azure Bast
You can use Azure Virtual WAN to: - Aggregate large-scale VPN connections.+ - Support any-to-any connections between resources in different on-premises global locations and in different regional hub-and-spoke virtual networks.+ - Optimize utilization of multiple home broadband networks. For more information, see [Azure Virtual WAN and supporting remote work](../virtual-wan/work-remotely-support.md).
-Another way to support a remote workforce is to deploy a virtual desktop infrastructure (VDI) hosted in your Azure virtual network, secured with Azure Firewall. For example, Azure Virtual Desktop is a desktop and app virtualization service that runs in Azure. With Virtual Desktop, you can set up a scalable and flexible environment in your Azure subscription without the need to run any additional gateway servers. You're responsible only for the Virtual Desktop virtual machines in your virtual network. For more information, see [Azure Firewall remote work support](../firewall/remote-work-support.md).
+Another way to support a remote workforce is to deploy a virtual desktop infrastructure (VDI) hosted in your Azure virtual network, secured with Azure Firewall. For example, Azure Virtual Desktop is a desktop and app virtualization service that runs in Azure. With Virtual Desktop, you can set up a scalable and flexible environment in your Azure subscription without the need to run any extra gateway servers. You're responsible only for the Virtual Desktop virtual machines in your virtual network. For more information, see [Azure Firewall remote work support](../firewall/remote-work-support.md).
Azure also has a rich set of ecosystem partners. Their network virtual appliances (NVAs) on Azure can also help scale VPN connectivity. For more information, see [NVA considerations for remote work](../vpn-gateway/nva-work-remotely-support.md).
Azure also has a rich set of ecosystem partners. Their network virtual appliance
The following Azure solutions can help enable employees to access your globally distributed resources. Your resources could be in any of the Azure regions, in on-premises networks, or even in other public or private clouds. -- **Azure virtual network peering**: If you deploy your resources in more than one Azure region or if you aggregate the connectivity of remotely working employees by using multiple virtual networks, you can establish connectivity between the virtual networks by using virtual network peering. For more information, see [Virtual network peering][VNet-peer].
+- **Azure virtual network peering**: You can connect virtual networks together by using virtual network peering. Virtual network peering is useful if your resources are in more than one Azure region or if you need to connect multiple virtual networks to support remote workers (a minimal CLI sketch follows this list). For more information, see [Virtual network peering][VNet-peer].
-- **Azure VPN-based solution**: For your remote employees connected to Azure via P2S or S2S VPN, you can enable access to on-premises networks by configuring S2S VPN between your on-premises networks and Azure VPN Gateway. For more information, see [Create a site-to-site connection][S2S].
+- **Azure VPN-based solution**: For remote employees connected to Azure, you can provide them with access to your on-premises networks by establishing a S2S VPN connection. This connection is between your on-premises networks and Azure VPN Gateway. For more information, see [Create a site-to-site connection][S2S].
-- **Azure ExpressRoute**: By using ExpressRoute private peering, you can enable private connectivity between your Azure deployments and on-premises infrastructure or your infrastructure in a colocation facility. ExpressRoute, via Microsoft peering, also permits accessing public endpoints in Microsoft from your on-premises network.
+- **Azure ExpressRoute**: By using ExpressRoute private peering, you can enable private connectivity between your Azure deployments and on-premises infrastructure or your infrastructure in a colocation facility. ExpressRoute, via Microsoft peering, also permits accessing public endpoints at Microsoft from your on-premises network.
ExpressRoute connections don't go over the public internet. They offer secure connectivity, reliability, and higher throughput, with lower and more consistent latencies than typical connections over the internet. For more information, see [ExpressRoute overview][ExR].
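As an illustration of the virtual network peering option mentioned above (all names are hypothetical), two virtual networks can be peered with the CLI; a matching peering must also be created in the opposite direction:

```azurecli
# Hypothetical names; also create the reverse peering from spoke-vnet to hub-vnet.
az network vnet peering create \
  --name hub-to-spoke \
  --resource-group my-resource-group \
  --vnet-name hub-vnet \
  --remote-vnet spoke-vnet \
  --allow-vnet-access
```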
The following articles discuss how you can use Azure networking features to scal
| **Article** | **Description** | | | |
-| [Remote work using Azure VPN Gateway point-to-site](../vpn-gateway/work-remotely-support.md) | Review available options to set up remote access for users or to supplement their existing solutions with additional capacity for your organization.|
+| [Remote work using Azure VPN Gateway point-to-site](../vpn-gateway/work-remotely-support.md) | Review available options to set up remote access for users or to supplement their existing solutions with extra capacity for your organization.|
| [Azure Virtual WAN and supporting remote work](../virtual-wan/work-remotely-support.md) | Use Azure Virtual WAN to address the remote connectivity needs of your organization.| | [Application Gateway high-traffic support](../application-gateway/high-traffic-support.md) | Use Azure Application Gateway with web application firewall (WAF) for a scalable and secure way to manage traffic to your web applications. | | [Working remotely: NVA considerations for remote work](../vpn-gateway/nva-work-remotely-support.md)|Review guidance about using NVAs in Azure to provide remote access solutions. |
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
Title: "Azure Operator Nexus: How to configure the L2 and L3 isolation-domains in Operator Nexus instances" description: Learn to create, view, list, update, delete commands for Layer 2 and Layer isolation-domains in Operator Nexus instances--++ Previously updated : 02/02/2023 #Required; mm/dd/yyyy format. Last updated : 04/02/2023 #Required; mm/dd/yyyy format.
You'll create isolation-domains to enable layer 2 and layer 3 connectivity betwe
## Parameters for isolation-domain management
-| Parameter| Description|
-| :--| :--|
-| vlan-id | VLAN identifier value. VLANs 1-500 are reserved and can't be used. The VLAN identifier value can't be changed once specified. The isolation-domain must be deleted and recreated if the VLAN identifier value needs to be modified. |
-| administrativeState | Indicate administrative state of the isolation-domain |
-| provisioningState | Indicates provisioning state |
+| Parameter|Description|Example|Required|
+|||||
+|resource-group |Name of the resource group to use for the isolation-domain (ISD)|ResourceGroupName|True|
+|resource-name |Resource name of the l2isolationDomain|example-l2domain| True|
+|location|AODS Azure region used during NFC creation|eastus| True|
+|nf-id |Network fabric ARM ID|/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname| True|
+|vlan-id | VLAN identifier value. VLANs 1-500 are reserved and can't be used. The VLAN identifier value can't be changed once specified. The isolation-domain must be deleted and recreated if the VLAN identifier value needs to be modified. The range is 501-4095|501| True|
+|mtu | Maximum transmission unit; 1500 by default if not specified|1500| |
+|administrativeState| Enable/Disable; indicates the administrative state of the isolation-domain|Enable| |
| subscriptionId | Your Azure subscriptionId for your Operator Nexus instance. |
-| resourceGroupName | Use the corresponding NFC resource group name |
-| resource-name | Resource Name of the isolation-domain |
-| nf-id | ARM ID of the Network fabric |
-| location | Azure region where the resource is being created |
+| provisioningState | Indicates provisioning state |
-## L2 isolation-domain
+## L2 Isolation-Domain
You use an L2 isolation-domain to establish layer 2 connectivity between workloads running on Operator Nexus compute nodes.
Create an L2 isolation-domain:
```azurecli az nf l2domain create \resource-group "NFresourcegroupname" \
+--resource-group "ResourceGroupName" \
--resource-name "example-l2domain" \ --location "eastus" \nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname" \vlan-id 501\mtu 1500
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname" \
+--vlan-id 750\
+--mtu 1501
``` Expected output:
Expected output:
```json { "administrativeState": "Disabled",
- "annotation": null,
+ "annotation": null,user
"disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
- "location": "eastus2euap",
- "mtu": 1500,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus",
+ "mtu": 1501,
"name": "example-l2domain", "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName", "provisioningState": "Succeeded",
- "resourceGroup": "NFresourcegroupname",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdAt": "2023-XX-XXT14:57:59.167177+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-02T05:59:00.534027+00:00",
+ "lastModifiedAt": "2023-XX-XXT14:57:59.167177+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" }, "tags": null, "type": "microsoft.managednetworkfabric/l2isolationdomains",
- "vlanId": 501
+ "vlanId": 750
} ```
Expected output:
This command shows L2 isolation-domain details and administrative state of isolation-domain. ```azurecli
-az nf l2domain show --resource-group "resourcegroupname" --resource-name "example-l2domain"
+az nf l2domain show --resource-group "ResourceGroupName" --resource-name "example-l2domain"
``` Expected Output
Expected Output
"administrativeState": "Disabled", "annotation": null, "disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
- "location": "eastus2euap",
- "mtu": 1500,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus",
+ "mtu": 1501,
"name": "example-l2domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"provisioningState": "Succeeded",
- "resourceGroup": "NFCresourcegroupname",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdAt": "2023-XX-XXT14:57:59.167177+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-02T05:59:00.534027+00:00",
+ "lastModifiedAt": "2023-XX-XXT14:57:59.167177+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" }, "tags": null, "type": "microsoft.managednetworkfabric/l2isolationdomains",
- "vlanId": 2026
+ "vlanId": 750
} ```
Expected Output
This command lists all l2 isolation-domains available in resource group. ```azurecli
-az nf l2domain list --resource-group "resourcegroupname"
+az nf l2domain list --resource-group "ResourceGroupName"
``` Expected Output
Expected Output
"administrativeState": "Enabled", "annotation": null, "disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
- "location": "eastus",
- "mtu": 1500,
- "name": "example-l2domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
- "provisioningState": "Succeeded",
- "resourceGroup": "NFCresourcegroupname",
- "systemData": {
- "createdAt": "2022-10-24T22:26:33.065672+00:00",
- "createdBy": "email@address.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-10-26T14:46:45.753165+00:00",
- "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
- "lastModifiedByType": "Application"
- },
- "tags": null,
- "type": "microsoft.managednetworkfabric/l2isolationdomains",
- "vlanId": 501
- },
- {
- "administrativeState": "Enabled",
- "annotation": null,
- "disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
"location": "eastus",
- "mtu": 1500,
+ "mtu": 1501,
"name": "example-l2domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxxxxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"provisioningState": "Succeeded",
- "resourceGroup": "NFCresourcegroupname",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-10-27T03:03:15.099007+00:00",
+ "createdAt": "2022-XX-XXT22:26:33.065672+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-10-27T03:45:31.864152+00:00",
+ "lastModifiedAt": "2022-XX-XXT14:46:45.753165+00:00",
"lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "lastModifiedByType": "Application" }, "tags": null, "type": "microsoft.managednetworkfabric/l2isolationdomains",
- "vlanId": 501
- },
+ "vlanId": 750
+ }
``` ### Enable/disable L2 isolation-domain
Expected Output
This command is used to change the administrative state of the isolation-domain. **Note:**
-Only after the isolation-domain is Enabled, that the layer 2 isolation-domain configuration is pushed to the Network fabric devices.
+The layer 2 isolation-domain configuration is pushed to the Network Fabric devices only after the isolation-domain is Enabled.
```azurecli
-az nf l2domain update-admin-state --resource-group "NFCresourcegroupname" --resource-name "example-l2domain" --state Enable/Disable
+az nf l2domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l2domain" --state Enable/Disable
``` Expected Output
Expected Output
"administrativeState": "Enabled", "annotation": null, "disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
- "location": "eastus2euap",
- "mtu": 1500,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus",
+ "mtu": 1501,
"name": "example-l2domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"provisioningState": "Succeeded",
- "resourceGroup": "NFCresourcegroupname",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdAt": "2023-XX-XXT14:57:59.167177+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:01:03.552772+00:00",
+ "lastModifiedAt": "2023-XX-XXT14:57:59.167177+00:00",
"lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "lastModifiedByType": "Application" },
Expected Output
This command is used to delete L2 isolation-domain ```azurecli
-az nf l2domain delete --resource-group "resourcegroupname" --resource-name "example-l2domain"
+az nf l2domain delete --resource-group "ResourceGroupName" --resource-name "example-l2domain"
``` Expected output:
To make changes to the L3 isolation-domain, first Disable the L3 isolation-domai
- Make changes to the L3 isolation-domain - Re-enable the L3 isolation-domain
-Procedure to show, enable/disable and delete IPv6 based isolation-domains is same as used for IPv4.
+The procedure to show, enable/disable, and delete IPv6-based isolation-domains is the same as that used for IPv4.
+The VLAN range for creating an isolation-domain is 501-4095.
+
+| Parameter|Description|Example|Required|
+|||||
+|resource-group |Name of the resource group to use for the isolation-domain (ISD)|ResourceGroupName|True|
+|resource-name |Resource Name of the l3isolationDomain|example-l3domain|True|
+|location|AODS Azure Region used during NFC Creation|eastus|True|
+|nf-id |Network fabric ARM ID (from NFC creation)|/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName| True|
### Create L3 isolation-domain You can create the L3 isolation-domain: ```azurecli
-az nf l3domain create
resource-group "NFCresourcegroupname"
+az nf l3domain create
+--resource-group "ResourceGroupName"
--resource-name "example-l3domain"location "eastus"nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName"external '{"optionBConfiguration": {"importRouteTargets": ["1234:1235"], "exportRouteTargets": ["1234:1234"]}}'
+--location "eastus"
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName"
``` > [!NOTE]
Expected Output
```json { "administrativeState": "Disabled",
+ "aggregateRouteConfiguration": null,
"annotation": null,
+ "connectedSubnetRoutePolicy": null,
"description": null, "disabledOnResources": null,
- "external": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
- "internal": null,
- "location": "eastus2euap",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "location": "eastus",
"name": "example-l3domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"optionBDisabledOnResources": null, "provisioningState": "Accepted",
- "resourceGroup": "resourcegroupname",
+ "redistributeConnectedSubnets": "True",
+ "redistributeStaticRoutes": "False",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T06:23:43.372461+00:00",
- "createdBy": "email@address.com",
+ "createdAt": "2022-XX-XXT06:23:43.372461+00:00",
+ "createdBy": "email@example.com",
"createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
- "lastModifiedBy": "email@address.com",
+ "lastModifiedAt": "2023-XX-XXT09:40:38.815959+00:00",
+ "lastModifiedBy": "email@example.com",
"lastModifiedByType": "User" }, "tags": null,
Expected Output
You can get the L3 isolation-domains details and administrative state. ```azurecli
-az nf l3domain show --resource-group "resourcegroupname" --resource-name "example-l3domain"
+az nf l3domain show --resource-group "ResourceGroupName" --resource-name "example-l3domain"
``` Expected Output
Expected Output
```json { "administrativeState": "Disabled",
+ "aggregateRouteConfiguration": null,
"annotation": null,
+ "connectedSubnetRoutePolicy": null,
"description": null, "disabledOnResources": null,
- "external": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
- "internal": null,
- "location": "eastus2euap",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "location": "eastus",
"name": "example-l3domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"optionBDisabledOnResources": null,
- "provisioningState": "Accepted",
- "resourceGroup": "resourcegroupname",
+ "provisioningState": "Succeeded",
+ "redistributeConnectedSubnets": "True",
+ "redistributeStaticRoutes": "False",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T06:23:43.372461+00:00",
- "createdBy": "email@address.com",
+ "createdAt": "2023-XX-XXT09:40:38.815959+00:00",
+ "createdBy": "email@example.com",
"createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
+ "lastModifiedAt": "2023-XX-XXT09:40:46.923037+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
}, "tags": null, "type": "microsoft.managednetworkfabric/l3isolationdomains"
Expected Output
You can get a list of all L3 isolation-domains available in a resource group. ```azurecli
-az nf l3domain list --resource-group "resourcegroupname"
+az nf l3domain list --resource-group "ResourceGroupName"
``` Expected Output
Expected Output
```json { "administrativeState": "Disabled",
+ "aggregateRouteConfiguration": null,
"annotation": null,
+ "connectedSubnetRoutePolicy": null,
"description": null, "disabledOnResources": null,
- "external": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
- "internal": null,
- "location": "eastus2euap",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "location": "eastus",
"name": "example-l3domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
"optionBDisabledOnResources": null, "provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
+ "redistributeConnectedSubnets": "True",
+ "redistributeStaticRoutes": "False",
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-11-02T06:23:43.372461+00:00",
- "createdBy": "email@address.com",
+ "createdAt": "2023-XX-XXT09:40:38.815959+00:00",
+ "createdBy": "email@example.com",
"createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
+ "lastModifiedAt": "2023-XX-XXT09:40:46.923037+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
}, "tags": null, "type": "microsoft.managednetworkfabric/l3isolationdomains"
Expected Output
Once the isolation-domain is created successfully, the next step is to create an internal network.
-## Internal network creation
+## Optional parameters for Isolation Domain
+
+| Parameter|Description|Example|Required|
+|||||
+| redistributeConnectedSubnets | Advertise connected subnets; default value is True |True | |
+| redistributeStaticRoutes |Advertise static routes; can be True/False. Default value is False | False | |
+| aggregateRouteConfiguration|List of IPv4 and IPv6 route configurations | | |
++
+## Internal Network Creation
+
+| Parameter|Description|Example|Required|
+|||||
+|vlan-id |VLAN identifier with range from 501 to 4095|1001|True|
+|resource-group|Use the corresponding NFC resource group name| NFCresourcegroupname | True|
+|l3-isolation-domain-name|Resource name of the l3isolationDomain|example-l3domain | True|
+|location|AODS Azure region used during NFC creation|eastus | True|
-| Parameter | Description | Example | Required | type |
-| : | :-- | :- | :- | :-- |
-| vlanId | VLAN identifier | 1001 | True | string |
-| connectedIPv4Subnets/Prefix | IP subnet used by the HAKS cluster's workloads | 10.0.0.0/24 | True | string |
-| connectedIPv4Subnets/gateway | IPv4 subnet gateway used by the HAKS cluster's workloads | 10.0.0.1 | True | string |
-| staticIPv4Routes/Prefix | IPv4 Prefix of the static route | NA |
-| staticIPv4Routes/nexthop | IPv4 next hop address | NA |
-| defaultRouteOriginate | True/False "Enables default route to be originated when advertising routes via BGP" |
-| fabricASN | ASN of Network fabric | 65048 | True | string |
-| peerASN | Peer ASN of Network Function | 65047 | True | string |
-| IPv4Prefix | IPv4 Prefix of NFs for BGP peering (range).<br />The maximum length of the prefix is /28. For example, in 10.1.0.0/28, 10.1.0.0 to 10.1.0.7 are reserved and can't be used by workloads. 10.1.0.1 is assigned as VIP on both CEs. 10.1.0.2 is assigned on CE1 and 10.1.0.3 is assigned on CE2.<br />Workloads must peer to CE1 and CE2. The IP addresses of workloads can start from 10.0.0.8.<br />When only the prefix is configured, and `ipv4NeighborAddresses` isn't specified, the fabric configures the valid addresses in the prefix as part of the listen range. If `ipv4NeighborAddresses` is specified, the fabric configures the specified addresses as neighbors.<br />A smaller prefix than /28, for example /29 or /30 can also be configured | NA | |
+
+## Options to create Internal Networks
+
+|Parameter|Description|Example|Required|
+|||||
+|connectedIPv4Subnets |IPv4 subnet used by the HAKS cluster's workloads|10.0.0.0/24||
+|connectedIPv6Subnets |IPv6 subnet used by the HAKS cluster's workloads|10:101:1::1/64||
+|staticRouteConfiguration |Static route configuration: IPv4/IPv6 prefixes and next-hop addresses|10.0.0.0/24| |
+|bgpConfiguration|BGP configuration for the internal network (peer ASN, neighbor addresses, listen ranges)| | |
+|defaultRouteOriginate | True/False "Enables default route to be originated when advertising routes via BGP" | True | |
+|peerASN |Peer ASN of Network Function|65047||
+|allowAS |Allows routes to be received and processed even if the router detects its own ASN in the AS-Path. A value of 0 disables this; possible values are 1-10; default is 2.|2||
+|allowASOverride |Enable or disable allowAS override|Enable||
+|ipv4ListenRangePrefixes| BGP IPv4 listen range, maximum range allowed in /28| 10.1.0.0/26 | |
+|ipv6ListenRangePrefixes| BGP IPv6 listen range, maximum range allowed in /127| 3FFE:FFFF:0:CD30::/126| |
+|ipv4NeighborAddress| IPv4 neighbor address|10.0.0.11| |
+|ipv6NeighborAddress| IPv6 neighbor address|10:101:1::11| |
This command creates an internal network with BGP configuration and specified peering address. **Note:** You need to create an internal network before you enable an L3 isolation-domain. ```azurecli
-az nf internalnetwork create \
resource-group "resourcegroupname" \l3-isolation-domain-name "example-l3domain" \resource-name "example-internalnetwork" \
+az nf internalnetwork create
+--resource-group "ResourceGroupName"
+--l3-isolation-domain-name "example-l3domain"
+--resource-name "example-internalnetwork"
--location "eastus"vlan-id 1001 \connected-ipv4-subnets '[{"prefix":"10.0.0.0/24", "gateway":"10.0.0.1"}]' \mtu 1500 \bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 65047 ,"ipv4NeighborAddress":[{"address": "10.0.0.11"}]}'
+--vlan-id 805
+--connected-ipv4-subnets '[{"prefix":"10.1.2.0/24"}]'
+--mtu 1500
+--bgp-configuration '{"defaultRouteOriginate": "True", "allowAS": 2, "allowASOverride": "Enable", "PeerASN": 65535, "ipv4ListenRangePrefixes": ["10.1.2.0/28"]}'
``` Expected Output ```json
-{
- "administrativeState": "Enabled",
- "annotation": null,
- "bfdDisabledOnResources": null,
- "bfdForStaticRoutesDisabledOnResources": null,
- "bgpConfiguration": {
- "annotation": null,
- "bfdConfiguration": null,
- "defaultRouteOriginate": false,
- "fabricAsn": 65048,
- "ipv4NeighborAddress": [
- {
- "address": "10.0.0.11",
- "operationalState": null
- }
- ],
- "ipv4Prefix": null,
- "ipv6NeighborAddress": null,
- "ipv6Prefix": null,
- "peerAsn": 65047
- },
- "bgpDisabledOnResources": null,
- "connectedIPv4Subnets": [
- {
- "annotation": null,
- "gateway": null,
- "prefix": "10.0.0.0/24"
- }
- ],
- "connectedIPv6Subnets": null,
- "disabledOnResources": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain/internalNetworks/example-internalnetwork",
- "mtu": 1500,
- "name": "example-internalnetwork",
- "provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
- "staticRouteConfiguration": null,
- "systemData": {
- "createdAt": "2022-11-02T06:25:05.983557+00:00",
- "createdBy": "email@address.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:25:05.983557+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
- },
- "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
- "vlanId": 1001
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "bfdDisabledOnResources": null,
+ "bfdForStaticRoutesDisabledOnResources": null,
+ "bgpConfiguration": {
+ "allowAs": 2,
+ "allowAsOverride": "Enable",
+ "annotation": null,
+ "bfdConfiguration": null,
+ "defaultRouteOriginate": "True",
+ "fabricAsn": 65046,
+ "ipv4ListenRangePrefixes": [
+ "10.1.2.0/28"
+ ],
+ "ipv4NeighborAddress": null,
+ "ipv6ListenRangePrefixes": null,
+ "ipv6NeighborAddress": null,
+ "peerAsn": 65535
+ },
+ "bgpDisabledOnResources": null,
+ "connectedIPv4Subnets": [
+ {
+ "annotation": null,
+ "prefix": "10.1.2.0/24"
+ }
+ ],
+ "connectedIPv6Subnets": null,
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "importRoutePolicyId": null,
+ "mtu": 1500,
+ "name": "internalnetwork805",
+ "provisioningState": "Accepted",
+ "resourceGroup": "ResourceGroupName",
+ "staticRouteConfiguration": null,
+ "systemData": {
+ "createdAt": "2023-XX-XXT05:26:33.547816+00:00",
+ "createdBy": "email@example.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT05:26:33.547816+00:00",
+ "lastModifiedBy": "email@example.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
+ "vlanId": 805
} ```
-**Note:** This command creates an Internal network where the BGP speakers of the NFs will be in the range 10.0.0.8 through 10.0.0.15
+## Multiple static routes with single next hop
```azurecli
-az nf internalnetwork create \
resource-group "resourcegroupname" \l3-isolation-domain-name "example-l3domain" \name "example-internalnetwork" \vlan-id 1000 \connected-ipv4-subnets '[{"prefix":"10.0.0.0/24", "gateway":"10.0.0.1"}]' \mtu 1500bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 5001 ,"ipv4Prefix": "10.0.0.0/28"}'
+az nf internalnetwork create
+--resource-name "example-internalnetwork"
+--l3domain "example-l3domain"
+--resource-group "ResourceGroupName"
+--location "eastus"
+--vlan-id "2028"
+--mtu "1500"
+--connected-ipv4-subnets '[{"prefix":"10.18.34.0/24","gateway":"10.18.34.2"}]' --bgp-configuration '{"defaultRouteOriginate":true,"peerASN":65510,"ipv4Prefix":"10.18.34.0/24"}'
+--static-route-configuration '{"ipv4Routes":[{"prefix":"10.23.0.0/19","nextHop":["10.20.0.1"]},{"prefix":"10.24.0.0/19","nextHop":["10.20.0.1"]}]}'
+
+```
+Expected Output
+```json
+{
+
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "bfdDisabledOnResources": null,
+ "bfdForStaticRoutesDisabledOnResources": null,
+ "bgpConfiguration": {
+ "allowAs": 2,
+ "allowAsOverride": "Enable",
+ "annotation": null,
+ "bfdConfiguration": null,
+ "defaultRouteOriginate": "True",
+ "fabricAsn": 65046,
+ "ipv4ListenRangePrefixes": null,
+ "ipv4NeighborAddress": null,
+ "ipv6ListenRangePrefixes": null,
+ "ipv6NeighborAddress": null,
+ "peerAsn": 65510
+ },
+
+ "bgpDisabledOnResources": null,
+ "connectedIPv4Subnets": [
+ {
+ "annotation": null,
+ "prefix": "10.18.34.0/24"
+ }
+ ],
+ "connectedIPv6Subnets": null,
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx7/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain/internalNetworks/example-internalnetwor",
+ "importRoutePolicyId": null,
+ "mtu": 1500,
+ "name": "example-internalnetwork",
+ "provisioningState": "Accepted",
+ "resourceGroup": "ResourceGroupName",
+ "staticRouteConfiguration": {
+ "bfdConfiguration": null,
+ "ipv4Routes": [
+ {
+ "nextHop": [
+ "10.20.0.1"
+ ],
+ "prefix": "10.23.0.0/19"
+ },
+ {
+ "nextHop": [
+ "10.20.0.1"
+ ],
+ "prefix": "10.24.0.0/19"
+ }
+ ],
+ "ipv6Routes": null
+ },
+ "systemData": {
+ "createdAt": "2023-XX-XXT13:46:26.394343+00:00",
+ "createdBy": "email@example.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT13:46:26.394343+00:00",
+ "lastModifiedBy": "email@example.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
+ "vlanId": 2028
+}
``` ### Internal network creation using IPv6 ```azurecli
-az nf internalnetwork create \
resource-group "resourcegroupname" \l3-isolation-domain-name "example-l3domain" \resource-name "example-internalipv6network" \location "eastus"vlan-id 1090 \connected-ipv6-subnets '[{"prefix":"10:101:1::0/64", "gateway":"10:101:1::1"}]'
- --mtu 1500
- --bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 65020 ,"ipv6NeighborAddress":[{"address": "10:101:1::11"}]}
+az nf internalnetwork create \
+--resource-group "ResourceGroupName" \
+--l3-isolation-domain-name "example-l3domain" \
+--resource-name "example-internalipv6network" \
+--location "eastus" \
+--vlan-id 1090 \
+--connected-ipv6-subnets '[{"prefix":"10:101:1::0/64", "gateway":"10:101:1::1"}]' \
+--mtu 1500 --bgp-configuration '{"defaultRouteOriginate":true,"peerASN": 65020,"ipv6NeighborAddress":[{"address": "10:101:1::11"}]}'
``` Expected Output ```json
-{
- "administrativeState": "Enabled",
- "annotation": null,
- "bfdDisabledOnResources": null,
- "bfdForStaticRoutesDisabledOnResources": null,
- "bgpConfiguration": {
- "annotation": null,
- "bfdConfiguration": null,
- "defaultRouteOriginate": true,
- "fabricAsn": 65048,
- "ipv4NeighborAddress": null,
- "ipv4Prefix": null,
- "ipv6NeighborAddress": [
- {
- "address": "10:101:1::11",
- "operationalState": null
- }
- ],
- "ipv6Prefix": null,
- "peerAsn": 65020
- },
- "bgpDisabledOnResources": null,
- "connectedIPv4Subnets": null,
- "connectedIPv6Subnets": [
- {
- "annotation": null,
- "gateway": "10:101:1::1",
- "prefix": "10:101:1::0/64"
- }
- ],
- "disabledOnResources": null,
- "exportRoutePolicyId": null,
- "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx7/resourceGroups/fab1nfrg121322/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/fab1-l3domain121822/internalNetworks/fab1-internalnetworkv16",
- "importRoutePolicyId": null,
- "mtu": 1500,
- "name": "example-internalipv6network",
- "provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
- "staticRouteConfiguration": null,
- "systemData": {
- "createdAt": "2022-12-15T12:10:34.364393+00:00",
- "createdBy": "email@address.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-12-15T12:10:34.364393+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
- },
- "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
- "vlanId": 1090
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "bfdDisabledOnResources": null,
+ "bfdForStaticRoutesDisabledOnResources": null,
+ "bgpConfiguration": {
+ "allowAs": 2,
+ "allowAsOverride": "Enable",
+ "annotation": null,
+ "bfdConfiguration": null,
+ "defaultRouteOriginate": "True",
+ "fabricAsn": 65046,
+ "ipv4ListenRangePrefixes": null,
+ "ipv4NeighborAddress": null,
+ "ipv6ListenRangePrefixes": null,
+ "ipv6NeighborAddress": [
+ {
+ "address": "10:101:1::11",
+ "operationalState": "Disabled"
+ }
+ ],
+ "peerAsn": 65020
+ },
+ "bgpDisabledOnResources": null,
+ "connectedIPv4Subnets": null,
+ "connectedIPv6Subnets": [
+ {
+ "annotation": null,
+ "prefix": "10:101:1::0/64"
+ }
+ ],
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/l3domain2/internalNetworks/internalipv6network",
+ "importRoutePolicyId": null,
+ "mtu": 1500,
+ "name": "internalipv6network",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "staticRouteConfiguration": null,
+ "systemData": {
+ "createdAt": "2023-XX-XXT10:34:33.933814+00:00",
+ "createdBy": "email@example.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT10:34:33.933814+00:00",
+ "lastModifiedBy": "email@example.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
+ "vlanId": 1090
} ```
Expected Output
This command creates an External network using Azure CLI.
-**Note:** For Option A, you need to create an external network before you enable the L3 isolation-domain.
-An external is dependent on Internal network, so an external can't be enabled without an internal network.
-The vlan-id value should be between 501 and 4095.
+|Parameter|Description|Example|Required|
+|--|--|--|--|
+|peeringOption |Peering using either OptionA or OptionB. Possible values are OptionA and OptionB |OptionB| True|
+|optionBProperties | OptionB properties configuration. To specify, use exportRouteTargets or importRouteTargets|"exportRouteTargets": ["1234:1234"]||
+|optionAProperties | Configuration of OptionA properties. Refer to the OptionA example in the section below |||
+|external|Optional parameter to input MPLS Option 10 (B) connectivity to external networks via PE devices. Using this option, a user can input import and export route targets as shown in the example| ||
+
+**Note:** For Option A, you need to create an external network before you enable the L3 isolation-domain. An external network depends on an internal network, so an external network can't be enabled without an internal network. The vlan-id value should be between 501 and 4095.
+
+## External Network Creation using Option B
```azurecli
-az nf externalnetwork create
resource-group "resourcegroupname"l3-isolation-domain-name "example-l3domain"name "example-externalnetwork"location "eastus"vlan-id 515fabric-asn 65025peer-asn 65026primary-ipv4-prefix "10.1.1.0/30"secondary-ipv4-prefix "10.1.1.4/30"
+az nf externalnetwork create \
+--resource-group "ResourceGroupName" \
+--l3domain "examplel3domain" \
+--resource-name "examplel3-externalnetwork" \
+--location "eastus" \
+--peering-option "OptionB" --option-b-properties '{"importRouteTargets": ["65541:2001"], "exportRouteTargets": ["65531:2001"]}'
``` Expected Output ```json {
- "administrativeState": null,
+ "administrativeState": "Enabled",
"annotation": null,
- "bfdConfiguration": null,
- "bfdDisabledOnResources": null,
- "bgpDisabledOnResources": null,
"disabledOnResources": null,
- "fabricAsn": 65025,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/l3domainNEWS3/externalNetworks/example-l3domain",
- "mtu": 0,
- "name": "example-l3domain",
- "peerAsn": 65026,
- "primaryIpv4Prefix": "10.1.1.0/30",
- "primaryIpv6Prefix": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxxX/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/examplel3isolationdomain/externalNetworks/example-externalnetwork",
+ "importRoutePolicyId": null,
+ "name": "examplel3-externalnetwork",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65531:2001"
+ ],
+ "importRouteTargets": [
+ "65541:2001"
+ ]
+ },
+ "peeringOption": "OptionB",
"provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
- "secondaryIpv4Prefix": "10.1.1.4/30",
- "secondaryIpv6Prefix": null,
+ "resourceGroup": "ResourceGroupName",
"systemData": {
- "createdAt": "2022-10-29T17:24:32.077026+00:00",
+ "createdAt": "2023-XX-XXT15:45:31.938216+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-07T09:28:18.873754+00:00",
+ "lastModifiedAt": "2023-XX-XXT15:45:31.938216+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
- "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks",
- "vlanId": 515
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks"
+}
+```
+## External Network creation with Option A
+
+```azurecli
+az nf externalnetwork create \
+--resource-group "ResourceGroupName" \
+--l3domain "example-l3domain" \
+--resource-name "example-externalipv4network" \
+--location "eastus" --peering-option "OptionA" \
+--option-a-properties '{"peerASN": 65026,"vlanId": 2423, "mtu": 1500, "primaryIpv4Prefix": "10.18.0.148/30", "secondaryIpv4Prefix": "10.18.0.152/30"}'
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxxX/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/examplel3isolationdomain/externalNetworks/example-externalnetwork",
+ "importRoutePolicyId": null,
+ "name": "example-externalipv4network",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": {
+ "bfdConfiguration": null,
+ "fabricAsn": 65026,
+ "mtu": 1500,
+ "peerAsn": 65026,
+ "primaryIpv4Prefix": "10.18.0.148/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "10.18.0.152/30",
+ "secondaryIpv6Prefix": null,
+ "vlanId": 2423
+ },
+
+ "optionBProperties": null,
+ "peeringOption": "OptionA",
+ "provisioningState": "Accepted",
+ "resourceGroup": "ResourceGroupName",
+ "systemData": {
+ "createdAt": "2023-XX-XXT07:23:54.396679+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+    "lastModifiedAt": "2023-XX-XXT07:23:54.396679+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks"
} ``` ### External network creation using IPv6 ```azurecli
-az nf externalnetwork create
resource-group " resourcegroupname "l3-isolation-domain-name " example-l3domain"resource-name " example-externalipv6network"location "westus3"vlan-id 516fabric-asn 65048
+az nf externalnetwork create
+--resource-group "ResourceGroupName"
+--l3-isolation-domain-name "example-l3domain"
+--resource-name "example-externalipv6network"
+--location "eastus"
+--vlan-id 506
--peer-asn 65022
- --primary-ipv4-prefix "10:101:2::0/127"
+--primary-ipv6-prefix "10:101:2::0/127"
--secondary-ipv6-prefix "10:101:3::0/127" ```
Expected Output
"bgpDisabledOnResources": null, "disabledOnResources": null, "exportRoutePolicyId": null,
- "fabricAsn": 65048,
- "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/fab1nfrg121322/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/fab1-l3domain121822/externalNetworks/fab1-externalnetworkv6",
+ "fabricAsn": 65026,
+ "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain/externalNetworks/example-externalipv6network",
"importRoutePolicyId": null, "mtu": 1500, "name": "example-externalipv6network",
Expected Output
"primaryIpv4Prefix": "10:101:2::0/127", "primaryIpv6Prefix": null, "provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
+ "resourceGroup": "ResourceGroupName",
"secondaryIpv4Prefix": null, "secondaryIpv6Prefix": "10:101:3::0/127", "systemData": {
- "createdAt": "2022-12-16T07:52:26.366069+00:00",
+ "createdAt": "2022-XX-XXT07:52:26.366069+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-12-16T07:52:26.366069+00:00",
+ "lastModifiedAt": "2022-XX-XXT07:52:26.366069+00:00",
"lastModifiedBy": "", "lastModifiedByType": "User" }, "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks",
- "vlanId": 516
+ "vlanId": 506
} ```
Expected Output
This command is used to change the administrative state of the L3 isolation-domain. Run the `show` command to verify whether the administrative state has changed to Enabled. ```azurecli
-az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "example-l3domain" --state Enable/Disable
+az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l3domain" --state Enable/Disable
``` Expected Output ```json {
- "administrativeState": "Enabled",
- "annotation": null,
- "description": null,
- "disabledOnResources": null,
- "external": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
- "internal": null,
- "location": "eastus2euap",
- "name": "example-l3domain",
- "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
- "optionBDisabledOnResources": null,
- "provisioningState": "Succeeded",
- "resourceGroup": "resourcegroupname",
- "systemData": {
- "createdAt": "2022-11-02T06:23:43.372461+00:00",
- "createdBy": "email@address.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:25:53.240975+00:00",
- "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
- "lastModifiedByType": "Application"
- },
- "tags": null,
- "type": "microsoft.managednetworkfabric/l3isolationdomains"
-}
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "description": null,
+ "disabledOnResources": null,
+ "external": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "internal": null,
+ "location": "eastus",
+ "name": "example-l3domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "optionBDisabledOnResources": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "systemData": {
+ "createdAt": "2022-XX-XXT06:23:43.372461+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-XX-XXT06:25:53.240975+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l3isolationdomains"
+ }
``` ### Delete L3 isolation-domains
Expected Output
This command is used to delete an L3 isolation-domain. ```azurecli
-az nf l3domain delete --resource-group "fab1-nf" --resource-name "example-l3domain"
+ az nf l3domain delete --resource-group "ResourceGroupName" --resource-name "example-l3domain"
``` Use the `show` or `list` commands to validate that the isolation-domain has been deleted.
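For example, a minimal validation sketch (assuming the same resource group and isolation-domain name used in the delete command above) is to run the `show` command and confirm the resource is no longer returned:

```azurecli
# Verify the isolation-domain is gone; a successful delete returns a resource-not-found error here.
az nf l3domain show --resource-group "ResourceGroupName" --resource-name "example-l3domain"
```

If the deletion succeeded, the command reports that the resource can't be found instead of returning the isolation-domain details.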
Figure Network Function networking diagram
### Create required L3 isolation-domains
-**Create an L3 isolation-domain `l3untrust`**
+## Create L3 Isolation domain Untrust
```azurecli
-az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
```-
-**Create an L3 isolation-domain `l3trust`**
+## Create L3 Isolation domain Trust
```azurecli
-az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
```-
-**Create an L3 isolation-domain `l3mgmt`**
+## Create L3 Isolation domain Mgmt
```azurecli
-az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` ### Create required internal networks
Now that the required L3 isolation-domains have been created, you can create the
- Management network: 10.151.2.11/24 - Untrusted network: 10.151.3.11/24
-**Create Internal Network in `l3untrust` L3 isolation-domain**
+## Internal Network Untrust L3 ISD
```azurecli
-az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
+az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
```-
-**Create Internal Network in `l3trust` L3 isolation-domain**
+## Internal Network Trust ISD
```azurecli
-az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
+az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
```-
-**Create Internal Network in `l3mgmt` L3 isolation-domain**
+## Internal Network Mgmt ISD
```azurecli
-az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
+az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
```-
-### Enable the L3 isolation-domains
-
-You've created the required L3 isolation-domains and the associated internal network. You now need to enable these isolation-domains.
-
-**Enable L3 isolation-domain `l3untrust`**
+## Enable ISD Untrust
```azurecli
-az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3untrust" --state Enable
+az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3untrust" --state Enable
```-
-**Enable L3 isolation-domain `l3trust`**
+## Enable ISD Trust
```azurecli
-az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3trust" --state Enable
+az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3trust" --state Enable
```-
-**Enable L3 isolation-domain `l3mgmt`**
+## Enable ISD Mgmt
```azurecli
-az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3mgmt" --state Enable
+az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
```
-## Example L2 isolation-domain creation for a workload
+#### The following example creates an L2 isolation-domain needed by a workload
-First, you need to create the `l2HAnetwork` L2 isolation-domain and then enable it.
-
-**Create `l2HAnetwork` L2 isolation-domain**
+## L2 Isolation domain
```azurecli
-az nf l2domain create --resource-group "resourcegroupname" --resource-name "l2HAnetwork" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" --vlan-id 505 --mtu 1500
+az nf l2domain create --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" --vlan-id 505 --mtu 1500
```-
-**Enable `l2HAnetwork` L2 isolation-domain**
+## Enable L2 Isolation Domain
```azurecli
-az nf l2domain update-administrative-state --resource-group "resourcegroupname" --resource-name "l2HAnetwork" --state Enable
+az nf l2domain update-administrative-state --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --state Enable
```
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Title: "Azure Operator Nexus: How to configure the Network Fabric" description: Learn to create, view, list, update, delete commands for Network Fabric--++ Last updated 03/26/2023 #Required; mm/dd/yyyy format.
-# Create and provision a Network Fabric using Azure CLI
+# Create and Provision a Network Fabric using Azure CLI
This article describes how to create a Network Fabric by using the Azure Command Line Interface (AzCLI). This document also shows you how to check the status, update, or delete a Network Fabric. + ## Prerequisites
-* A Network Fabric Controller is successfully provisioned.
- * A Network Fabric Controller instance in Azure manages multiple Network Fabric Resources.
- * You can reuse a pre-existing Network Fabric Controller.
-* Physical infrastructure installed and cabled as per BOM.
-* ExpressRoute connectivity established between the Azure region and your WAN (your networking).
-* The needed VLANs, Route-Targets and IP addresses configured in your network.
-* Terminal Server [installed and configured](./howto-platform-prerequisites.md#set-up-terminal-server)
-
-## Parameters needed to create Network Fabric
-
-| Parameter | Description | Example | Required| Type|
-|--|| |-||
-| resource-group | Name of the resource group | "NFResourceGroup" |True | String |
-| location | Location of Azure region | "eastus" |True | String |
-| resource-name | Name of the FabricResource | NF-Lab1 |True | String |
-| nf-sku |Fabric SKU ID, based on the ordered SKU of the BoM. Contact AFO team for specific SKU value for the BoM | M8-A400-A100-C16-aa |True | String|
-| nfc-id |Network Fabric Controller ARM resource ID| |True | String |
-| rack-count |Total number of compute racks | 8 |True | Integer |
-| server-count-per-rack |Total number of worker nodes per rack| 16 |True | Integer |
-||
-|**managed-network-config**| Details of management network ||True ||
-|ipv4Prefix|IPv4 Prefix of the management network. This Prefix should be unique across all Network Fabrics in a Network Fabric Controller. Prefix length should be at least 19 (/20 not allowed, /18 and lower allowed) | 10.246.0.0/19|True | String |
-|ipv6Prefix|IPv6 Prefix of the management network. This Prefix should be unique across all Network Fabrics in a Network Fabric Controller. Prefix length should be at least 59 (/60 not allowed, /58 and lower allowed) | fd01:0:1234:00e0::/59|True | String |
-||
-|**managementVpnConfiguration**| Details of management VPN connection between Network Fabric and infrastructure services in Network Fabric Controller||True ||
-|*optionBProperties*| Details of MPLS option 10B used for connectivity between Network Fabric and Network Fabric Controller||True ||
-|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B| 65048:10039|True(If OptionB enabled)|Integer |
-|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B| 65048:10039|True(If OptionB enabled)|Integer |
-||
-|**workloadVpnConfiguration**| Details of workload VPN connection between Network Fabric and workload services in Network Fabric Controller||||
-|*optionBProperties*| Details of MPLS option 10B used for connectivity between Network Fabric and Network Fabric Controller||||
-|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|for example, 65048:10050|True(If OptionB enabled)|Integer |
-|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|for example, 65048:10050|True(If OptionB enabled)|Integer |
-||
-|**ts-config**| Terminal Server Configuration Details||True ||
-|primaryIpv4Prefix| The terminal server Net1 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.10.0/30, TS Net1 interface should be assigned 20.0.10.1 and PE interface 20.0.10.2|True|String |
-|secondaryIpv4Prefix|IPv4 Prefix for connectivity between TS and PE2. The terminal server Net2 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.0.4/30, TS Net2 interface should be assigned 20.0.10.5 and PE interface 20.0.10.6|True|String |
-|primaryIpv6Prefix| The terminal server Net1 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address| TS Net1 interface should be assigned the next IP and PE interface the next IP |True|String |
-|secondaryIpv6Prefix|IPv6 Prefix for connectivity between TS and PE2. The terminal server Net2 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address| TS Net2 interface should be assigned next IP and PE interface the next IP |True|String |
-|username| Username configured on the terminal server that the services use to configure TS||True|String|
-|password| Password configured on the terminal server that the services use to configure TS||True|String|
-||
-|**nni-config**| Network to Network Inter-connectivity configuration between CEs and PEs||True||
-|*layer2Configuration*| Layer 2 configuration ||||
-|portCount| Number of ports that are part of the port-channel. Maximum value is based on Fabric SKU|2||Integer|
-|mtu| Maximum transmission unit between CE and PE. |1500||Integer|
-|*layer3Configuration*| Layer 3 configuration between CEs and PEs||True||
-|primaryIpv4Prefix|IPv4 Prefix for connectivity between CE1 and PE1. CE1 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE1 should be assigned the second usable address|10.246.0.124/31, CE1 port-channel interface is assigned 10.246.0.125 and PE1 port-channel interface should be assigned 10.246.0.126||String|
-|secondaryIpv4Prefix|IPv4 Prefix for connectivity between CE2 and PE2. CE2 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE2 should be assigned the second usable address|10.246.0.128/31, CE2 port-channel interface should be assigned 10.246.0.129 and PE2 port-channel interface 10.246.0.130||String|
-|primaryIpv6Prefix|IPv6 Prefix for connectivity between CE1 and PE1. CE1 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE1 should be assigned the second usable address|10.246.0.124/31, CE1 port-channel interface is assigned 10.246.0.125 and PE1 port-channel interface should be assigned 10.246.0.126||String|
-|secondaryIpv6Prefix|IPv6 Prefix for connectivity between CE2 and PE2. CE2 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE2 should be assigned the second usable address|10.246.0.128/31, CE2 port-channel interface should be assigned 10.246.0.129 and PE2 port-channel interface 10.246.0.130||String|
-|FabricAsn|ASN number assigned on CE for BGP peering with PE|65048||Integer|
-|peerAsn|ASN number assigned on PE for BGP peering with CE. For iBGP between PE/CE, the value should be same as FabricAsn, for eBGP the value should be different from FabricAsn |65048|True|Integer|
-|vlan-id| VLAN identifier used for connectivity between PE/CE. The value should be between 10 to 20| 10-20||Integer|
-||
+* An Azure account with an active subscription.
+* Install the latest version of the CLI commands (2.0 or later). For information about installing the CLI commands, see [Install Azure CLI](./howto-install-cli-extensions.md).
+* A Network Fabric Controller manages multiple Network Fabrics in the same Azure region.
+* A physical Operator-Nexus instance with cabling as per the BoM.
+* ExpressRoute connectivity between the NFC and Operator-Nexus instances.
+* Terminal server [installed and configured](./howto-platform-prerequisites.md#set-up-terminal-server) with a username and password.
+* PE devices pre-configured with the necessary VLANs, Route-Targets, and IP addresses.
+* Supported Fabric SKUs from NFA Release 1.5 and beyond are **M4-A400-A100-C16-aa** and **M8-A400-A100-C16-aa**.
+ * M4-A400-A100-C16-aa - Up to four Compute Racks
+ * M8-A400-A100-C16-aa - Up to eight Compute Racks
+
+## Steps to Provision a Fabric & Racks
+
+* Create a Network Fabric by providing racks, server count, SKU, and network configuration.
+* Create a Network to Network Interconnect by providing Layer 2 and Layer 3 parameters.
+* Update the serial number in the networkDevice resource with the actual serial number on the device.
+* Configure the terminal server with the serial numbers of all the devices.
+* Provision the Network Fabric.
-## Create a Network Fabric
-Resource group must be created before Network Fabric creation. It's recommended to create a separate resource group for each Network Fabric. Resource group is created with the following command:
+## Fabric Configuration
+
+The following table specifies parameters used to create Network Fabric
+
+| Parameter | Description | Example | Required |
+|--|-||-|
+| resource-group | Name of the resource group | "NFResourceGroup" |True |
+| location | Operator-Nexus Azure region | "eastus" |True |
+| resource-name | Name of the FabricResource | NF-ResourceName |True |
+| nf-sku |Fabric SKU ID is the SKU of the ordered BoM. Two SKUs are supported (**M4-A400-A100-C16-aa** and **M8-A400-A100-C16-aa**). | M4-A400-A100-C16-aa |True |
+|nfc-id|Network Fabric Controller ARM resource id|/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName|True |
+|rackcount|Number of compute racks per fabric. Possible values are 2-8|8|True |
+|serverCountPerRack|Number of compute servers per rack. Possible values are 4, 8, 12 or 16|16|True |
+|ipv4Prefix|IPv4 Prefix of the management network. This Prefix should be unique across all Network Fabrics in a Network Fabric Controller. Prefix length should be at least 19 (/20 isn't allowed, /18 and lower are allowed) | 10.246.0.0/19|True |
+|ipv6Prefix|IPv6 Prefix of the management network. This Prefix should be unique across all Network Fabrics in a Network Fabric Controller. | 10:5:0:0::/59|True |
+|**management-network-config**| Details of management network ||True |
+|**infrastructureVpnConfiguration**| Details of management VPN connection between Network Fabric and infrastructure services in Network Fabric Controller||True
+|*optionBProperties*| Details of MPLS option 10B is used for connectivity between Network Fabric and Network Fabric Controller||True
+|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|e.g., 65048:10039|True(If OptionB enabled)|
+|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|e.g., 65048:10039|True(If OptionB enabled)|
+|**workloadVpnConfiguration**| Details of workload VPN connection between Network Fabric and workload services in Network Fabric Controller||
+|*optionBProperties*| Details of MPLS option 10B is used for connectivity between Network Fabric and Network Fabric Controller||
+|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|e.g., 65048:10050|True(If OptionB enabled)|
+|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|e.g., 65048:10050|True(If OptionB enabled)|
+|**ts-config**| Terminal Server Configuration Details||True
+|primaryIpv4Prefix| The terminal server Net1 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.10.0/30, TS Net1 interface should be assigned 20.0.10.1 and PE interface 20.0.10.2|True|
+|secondaryIpv4Prefix|IPv4 Prefix for connectivity between TS and PE2. The terminal server Net2 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.0.4/30, TS Net2 interface should be assigned 20.0.10.5 and PE interface 20.0.10.6|True|
+|username| Username configured on the terminal server that the services use to configure TS|username|True|
+|password| Password configured on the terminal server that the services use to configure TS|password|True|
+|serialNumber| Serial number of Terminal Server|SN of the Terminal Server||
++
+## Create a Network Fabric
+
+A resource group must be created before Network Fabric creation. It's recommended to create a separate resource group for each Network Fabric. The resource group can be created with the following command:
```azurecli az group create -n NFResourceGroup -l "East US" ```- Run the following command to create the Network Fabric: ```azurecli
-az nf fabric create \
resource-group "NFResourceGroupName" \+
+az nf fabric create \
+--resource-group "NFResourceGroupName"
--location "eastus" \ --resource-name "NFName" \ --nf-sku "NFSKU" \nfc-id ""/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName" \" \fabric-asn 65014 \ipv4-prefix 10.x.0.0/19 \ipv6-prefix fda0:d59c:da05::/59 \rack-count 8 \server-count-per-rack 16 \ts-config '{"primaryIpv4Prefix":"20.x.0.5/30","secondaryIpv4Prefix": "20.x.1.6/30","username":"*****", "password": "************", "serialNumber":"************"}' \managed-network-config '{"infrastructureVpnConfiguration":{"peeringOption":"OptionB","optionBProperties":{"importRouteTargets":["65014:10039"],"exportRouteTargets":["65014:10039"]}}, "workloadVpnConfiguration":{"peeringOption": "OptionB", "optionBProperties": {"importRouteTargets": ["65014:10050"], "exportRouteTargets": ["65014:10050"]}}}'
+--nfc-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName"
+--fabric-asn 65048
+--ipv4-prefix 10.2.0.0/19
+--ipv6-prefix fda0:d59c:da02::/59
+--rack-count 4
+--server-count-per-rack 8
+--ts-config '{"primaryIpv4Prefix":"20.0.1.0/30", "secondaryIpv4Prefix":"20.0.0.0/30", "username":"****", "password": "****", "serialNumber":"TerminalServerSerialNumber"}'
+--managed-network-config '{"infrastructureVpnConfiguration":{"peeringOption":"OptionB","optionBProperties":{"importRouteTargets":["65048:10039"],"exportRouteTargets":["65048:10039"]}}, "workloadVpnConfiguration":{"peeringOption": "OptionB", "optionBProperties": {"importRouteTargets": ["65048:10050"], "exportRouteTargets": ["65048:10050"]}}}'
+ ```
+> [!Note]
+> * If it's a four-rack setup, the rack count would be 4.
+> * If it's an eight-rack setup, the rack count would be 8.
Expected output:
-```json
+```output
{ "annotation": null,
- "fabricAsn": 65014,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "ipv4Prefix": "10.x.0.0/19",
- "ipv6Prefix": "fda0:d59c:da05::/59",
+ "fabricAsn": 65048,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "ipv4Prefix": "10.2.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da02::/59",
"l2IsolationDomains": null, "l3IsolationDomains": null, "location": "eastus",
Expected output:
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65014:10039"
+ "65048:10039"
], "importRouteTargets": [
- "65014:10039"
+ "65048:10039"
] }, "peeringOption": "OptionB"
Expected output:
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65014:10050"
+ "65048:10050"
], "importRouteTargets": [
- "65014:10050"
+ "65048:10050"
] }, "peeringOption": "OptionB" } }, "name": "NFName",
- "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
"networkFabricSku": "NFSKU", "operationalState": null, "provisioningState": "Accepted",
- "rackCount": 8,
+ "rackCount": 3,
"racks": null,
- "resourceGroup": "NFResourceGroup",
+ "resourceGroup": "NFResourceGroupName",
"routerId": null,
- "serverCountPerRack": 16,
+ "serverCountPerRack": 7,
"systemData": {
- "createdAt": "2023-03-10T11:06:33.818069+00:00",
+    "createdAt": "2023-XX-XXT12:52:11.769525+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2023-03-10T11:06:33.818069+00:00",
+    "lastModifiedAt": "2023-XX-XXT12:52:11.769525+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
"terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.x.0.5/30",
+ "primaryIpv4Prefix": "20.0.1.0/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.x.1.6/30",
+ "secondaryIpv4Prefix": "20.0.0.0/30",
"secondaryIpv6Prefix": null,
- "serialNumber": "xxxxxxxx",
- "username": "xxxxxxxx"
+ "serialNumber": "TerminalServerSerialNumber",
+ "username": "****"
}, "type": "microsoft.managednetworkfabric/networkfabrics" } ```-
-## List Network Fabric
+## Show Network Fabric
```azurecli
-az nf fabric list --resource-group "NFResourceGroup"
+az nf fabric show --resource-group "NFResourceGroupName" --resource-name "NFName"
```- Expected output:
-```json
-[
- {
- "annotation": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "ipv4Prefix": "10.x.0.0/19",
- "ipv6Prefix": "fda0:d59c:da05::/59",
+```output
+
+{
+ "annotation": null,
+ "fabricAsn": 65048,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "ipv4Prefix": "10.2.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da02::/59",
"l2IsolationDomains": null, "l3IsolationDomains": null, "location": "eastus",
Expected output:
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65014:10039"
+ "65048:10039"
], "importRouteTargets": [
- "65014:10039"
+ "65048:10039"
] }, "peeringOption": "OptionB"
Expected output:
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65014:10050"
+ "65048:10050"
], "importRouteTargets": [
- "65014:10050"
+ "65048:10050"
] }, "peeringOption": "OptionB" } },
- "name": "NFName",
- "networkfabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
+ "name": "nffab1031623",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
"networkFabricSku": "NFSKU", "operationalState": null, "provisioningState": "Succeeded",
- "rackCount": 8,
- "racks": null,
+ "rackCount": 3,
+ "racks": [
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack2"
+ ],
"resourceGroup": "NFResourceGroup", "routerId": null,
- "serverCountPerRack": 16,
+ "serverCountPerRack": 7,
"systemData": {
- "createdAt": "2023-03-10T11:06:33.818069+00:00",
+ "createdAt": "2023-XX-XXT12:52:11.769525+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2023-03-10T11:06:33.818069+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
+ "lastModifiedAt": "2023-XX-XXT12:53:02.504974+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
}, "tags": null, "terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.x.0.5/30",
+ "primaryIpv4Prefix": "20.0.1.0/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.x.1.6/30",
+ "secondaryIpv4Prefix": "20.0.0.0/30",
"secondaryIpv6Prefix": null,
- "serialNumber": "xxxxxxxx",
- "username": "xxxxxxxx"
+ "serialNumber": "TerminalServerSerialNumber",
+ "username": "****"
}, "type": "microsoft.managednetworkfabric/networkfabrics" }
-]
+
+```
+
+## List or Get Network Fabric
+
+```azurecli
+az nf fabric list --resource-group "NFResourceGroup"
+```
+
+Expected output:
+
+```output
+{
+ "annotation": null,
+ "fabricAsn": 65048,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "ipv4Prefix": "10.2.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da02::/59",
+ "l2IsolationDomains": [Null],
+ "l3IsolationDomains": [Null],
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "infrastructureVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10039"
+ ],
+ "importRouteTargets": [
+ "65048:10039"
+ ]
+ },
+ "peeringOption": "OptionB"
+ },
+ "workloadVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10050"
+ ],
+ "importRouteTargets": [
+ "65048:10050"
+ ]
+ },
+ "peeringOption": "OptionB"
+ }
+ },
+ "name": "NFName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "operationalState": "Provisioned",
+ "provisioningState": "Succeeded",
+ "rackCount": 3,
+ "racks": [
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack2"
+ ],
+ "resourceGroup": "NFResourceGroup",
+ "routerId": null,
+ "serverCountPerRack": 7,
+ "systemData": {
+ "createdAt": "2023-XX-XXT12:52:11.769525+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT02:05:44.043591+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.0.1.0/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.0.0.0/30",
+ "secondaryIpv6Prefix": null,
+ "serialNumber": "TerminalServerSerialNumber",
+ "username": "****"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
+ }
```
-## Create NNI
+## NNI Configuration
+
+The following table specifies parameters used to create Network to Network Interconnect
+
-Upon creating Network Fabric, the next action is to create NNI.
-Run the following command to create the NNI:
+| Parameter | Description | Example | Required |
+|--|-||-|
+|isManagementType| Configuration to make the NNI be used for management of the Fabric. Default value is true. Possible values are True/False |True|True
+|useOptionB| Configuration to enable optionB. Possible values are True/False |True|True
+||
+|*layer2Configuration*| Layer 2 configuration ||
+||
+|portCount| Number of ports that are part of the port-channel. Maximum value is based on Fabric SKU|2||
+|mtu| Maximum transmission unit between CE and PE. |1500||
+||
+|*layer3Configuration*| Layer 3 configuration between CEs and PEs||True
+||
+|primaryIpv4Prefix|IPv4 Prefix for connectivity between CE1 and PE1. CE1 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE1 should be assigned the second usable address|10.246.0.124/31, CE1 port-channel interface is assigned 10.246.0.125 and PE1 port-channel interface should be assigned 10.246.0.126||String
+|secondaryIpv4Prefix|IPv4 Prefix for connectivity between CE2 and PE2. CE2 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE2 should be assigned the second usable address|10.246.0.128/31, CE2 port-channel interface should be assigned 10.246.0.129 and PE2 port-channel interface 10.246.0.130||String
+|primaryIpv6Prefix|IPv6 Prefix for connectivity between CE1 and PE1. CE1 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE1 should be assigned the second usable address|3FFE:FFFF:0:CD30::a1 is assigned to CE1 and 3FFE:FFFF:0:CD30::a2 is assigned to PE1. Default value is 3FFE:FFFF:0:CD30::a0/126||String
+|secondaryIpv6Prefix|IPv6 Prefix for connectivity between CE2 and PE2. CE2 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE2 should be assigned the second usable address|3FFE:FFFF:0:CD30::a5 is assigned to CE2 and 3FFE:FFFF:0:CD30::a6 is assigned to PE2. Default value is 3FFE:FFFF:0:CD30::a4/126.||String
+|fabricAsn|ASN number assigned on CE for BGP peering with PE|65048||
+|peerAsn|ASN number assigned on PE for BGP peering with CE. For iBGP between PE/CE, the value should be same as fabricAsn, for eBGP the value should be different from fabricAsn |65048|True|
+|vlan-Id|VLAN for NNI. Range is between 501-4095 |501||
+|importRoutePolicy|Details to import route policy.|||
+|exportRoutePolicy|Details to export route policy.|||
+||||
-```azurecl
-az nf nni create --resource-group "NFResourceGroup" \
+## Create a Network to Network Interconnect
+
+The resource group and Network Fabric must be created before Network to Network Interconnect creation.
++
+Run the following command to create the Network to Network Interconnect:
+
+```azurecli
+
+az nf nni create \
+--resource-group "NFResourceGroup" \
--location "eastus" \resource-name "NNIResourceName" \fabric "NFName" \
+--resource-name "NFNNIName" \
+--fabric "NFFabric" \
--is-management-type "True" \ --use-option-b "True" \layer2-configuration '{"portCount": 1, "mtu": 1500}' \layer3-configuration '{"peerASN": 65014, "vlanId": 683, "primaryIpv4Prefix": "10.x.0.124/30", "secondaryIpv4Prefix": "10.x.0.128/30", "primaryIpv6Prefix": "fda0:d59c:da0a:500::7c/127", "secondaryIpv6Prefix": "fda0:d59c:da0a:500::80/127"}'
+--layer2-configuration '{"portCount": 3, "mtu": 1500}' \
+--layer3-configuration '{"peerASN": 65048, "vlanId": 501, "primaryIpv4Prefix": "10.2.0.124/30", "secondaryIpv4Prefix": "10.2.0.128/30", "primaryIpv6Prefix": "10:2:0:124::400/127", "secondaryIpv6Prefix": "10:2:0:124::402/127"}'
+ ``` Expected output:
-```json
+```output
{ "administrativeState": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName/networkToNetworkInterconnects/NNIResourceName",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/nffab1031623/networkToNetworkInterconnects/NFNNIName",
"isManagementType": "True", "layer2Configuration": { "interfaces": null, "mtu": 1500,
- "portCount": 1
+ "portCount": 3
}, "layer3Configuration": { "exportRoutePolicyId": null, "fabricAsn": null, "importRoutePolicyId": null,
- "peerAsn": 65014,
- "primaryIpv4Prefix": "10.x.0.124/30",
- "primaryIpv6Prefix": "fda0:d59c:da0a:500::7c/127",
- "secondaryIpv4Prefix": "10.x.0.128/30",
- "secondaryIpv6Prefix": "fda0:d59c:da0a:500::80/127",
- "vlanId": 683
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.2.0.124/30",
+ "primaryIpv6Prefix": "10:2:0:124::400/127",
+ "secondaryIpv4Prefix": "10.2.0.128/30",
+ "secondaryIpv6Prefix": "10:2:0:124::402/127",
+ "vlanId": 501
},
- "name": "NNIResourceName",
+ "name": "NFNNIName",
"provisioningState": "Succeeded", "resourceGroup": "NFResourceGroup", "systemData": {
- "createdAt": "2023-03-10T13:35:45.952324+00:00",
+ "createdAt": "2023-XX-XXT13:13:22.514644+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2023-03-10T13:35:45.952324+00:00",
+ "lastModifiedAt": "2023-XX-XXT13:13:22.514644+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" }, "type": "microsoft.managednetworkfabric/networkfabrics/networktonetworkinterconnects", "useOptionB": "True"
-}
+}
+```
-Once NNI created, NFA creates the corresponding Device resources.
+## Show Network Fabric NNI (Network to Network Interconnect)
-## Next steps
+```azurecli
+az nf nni show -g "NFResourceGroup" --resource-name "NFNNIName" --fabric "NFFabric"
-* Update the serial number in the Device resource with the actual serial number on the device. The device sends the serial number as part of DHCP request.
-* Configure the terminal server with the serial numbers of all the Devices (which also hosts DHCP server)
-* Provision the Device via zero-touch provisioning mode. Based on the serial number in the DHCP request, the DHCP server responds with the boot configuration file for the corresponding Device
+```
-## Update Network Fabric Device
+Expected output:
+
+```output
+{
+ "administrativeState": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFFabric/networkToNetworkInterconnects/NFNNIName",
+ "isManagementType": "True",
+ "layer2Configuration": {
+ "interfaces": null,
+ "mtu": 1500,
+ "portCount": 3
+ },
+ "layer3Configuration": {
+ "exportRoutePolicyId": null,
+ "fabricAsn": null,
+ "importRoutePolicyId": null,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.2.0.124/30",
+ "primaryIpv6Prefix": "10:2:0:124::400/127",
+ "secondaryIpv4Prefix": "10.2.0.128/30",
+ "secondaryIpv6Prefix": "10:2:0:124::402/127",
+ "vlanId": 501
+ },
+ "name": "NFNNIName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2023-XX-XXT13:13:22.514644+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+    "lastModifiedAt": "2023-XX-XXT13:13:22.514644+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics/networktonetworkinterconnects",
+ "useOptionB": "True"
+}
+```
-Run the following command to update Device with required details:
++
+## List or Get Network Fabric NNI (Network to Network Interconnect)
```azurecli
-az nf device update \
resource-group "NFResourceGroup" \location "eastus" \resource-name "network-device-name" \host-name "NFName-CR2-TOR1" \serial-number "12345"
+az nf nni list -g NFResourceGroup --fabric NFFabric
``` Expected output:
-```json
+```output
+{
+ "administrativeState": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFFabric/networkToNetworkInterconnects/NFNNIName",
+ "isManagementType": "True",
+ "layer2Configuration": {
+ "interfaces": null,
+ "mtu": 1500,
+ "portCount": 3
+ },
+ "layer3Configuration": {
+ "exportRoutePolicyId": null,
+ "fabricAsn": null,
+ "importRoutePolicyId": null,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.2.0.124/30",
+ "primaryIpv6Prefix": "10:2:0:124::400/127",
+ "secondaryIpv4Prefix": "10.2.0.128/30",
+ "secondaryIpv6Prefix": "10:2:0:124::402/127",
+ "vlanId": 501
+ },
+ "name": "NFNNIName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2023-XX-XXT13:13:22.514644+00:00",
+    "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT13:13:22.514644+00:00",
+    "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics/networktonetworkinterconnects",
+ "useOptionB": "True"
+ }
+```
++++
+## Next Steps
+
+* Update the serial number in the networkDevice resource with the actual serial number on the device. The device sends the serial number as part of the DHCP request.
+* Configure the terminal server (which also hosts the DHCP server) with the serial numbers of all the devices.
+* Provision the network devices via zero-touch provisioning mode. Based on the serial number in the DHCP request, the DHCP server responds with the boot configuration file for the corresponding device.
++
+## Update Network Fabric Devices
+
+Run the following command to update Network Fabric Devices:
+
+```azurecli
+
+az nf device update \
+--resource-group "NFResourceGroup" \
+--resource-name "Network-Device-Name" \
+--location "eastus" \
+--serial-number "xxxx"
+
+```
+
+Expected output:
+
+```output
{ "annotation": null,
- "hostName": "NFName-CR2-TOR1",
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR2-TOR1",
- "location": "eastus",
- "name": "networkDevice1",
- "networkRackId": null,
+ "hostName": "AggrRack-CE01",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/Network-Device-Name",
+ "location": "eastus2euap",
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "CE1",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
- "serialNumber": "Arista;DCS-7010TX-48;12.00;JPE12345678",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "AXXXX;DCS-XXXXX-24;XX.XX;JXXXXXXX",
"systemData": {
- "createdAt": "2022-10-26T09:30:14.424546+00:00",
+ "createdAt": "2023-XX-XXT12:52:42.270551+00:00",
"createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "createdByType": "Application",
- "lastModifiedAt": "2022-10-31T15:45:24.320290+00:00",
+ "lastModifiedAt": "2023-XX-XXT13:30:24.098335+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
"version": null } ```
+> [!NOTE]
+> The above snapshot only serves as an example. You should update all the devices that are part of both the AggRack and the compute racks; a loop sketch follows the list below.
-## List Network Fabric Device
+For example, the AggRack consists of:
+* CE01
+* CE02
+* TOR17
+* TOR18
+* Mgmt Switch01
+* Mgmt Switch02, and so on.
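+
+Here's a minimal sketch of looping over the devices in a rack to set each serial number (the device names and serial numbers below are placeholders; substitute the values from your own racks):
+
+```azurecli
+# Hypothetical device-name/serial-number pairs for one rack; replace with real values
+declare -A serials=(
+  ["NFName-AggrRack-CE1"]="serial-ce1"
+  ["NFName-AggrRack-CE2"]="serial-ce2"
+  ["NFName-AggrRack-TOR17"]="serial-tor17"
+)
+
+for device in "${!serials[@]}"; do
+  az nf device update \
+    --resource-group "NFResourceGroup" \
+    --resource-name "$device" \
+    --location "eastus" \
+    --serial-number "${serials[$device]}"
+done
+```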
-Run the following command to list Device:
+## List or Get Network Fabric Devices
+
+Run the following command to list Network Fabric devices:
```azurecli az nf device list --resource-group "NFResourceGroup"
az nf device list --resource-group "NFResourceGroup"
Expected output:
-```json
+```output
{ "annotation": null,
- "hostName": "NFName-CR1-TOR1",
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR1-TOR1",
+ "hostName": "AggrRack-CE01",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-CE1",
"location": "eastus",
- "name": "networkDevice1",
- "networkRackId": null,
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "CE1",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
- "serialNumber": "Arista;DCS-7280DR3-24;12.05;JPE12345678",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
"systemData": {
- "createdAt": "2022-10-20T17:23:49.203745+00:00",
+ "createdAt": "2023-XX-XXT12:52:42.270551+00:00",
"createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "createdByType": "Application",
- "lastModifiedAt": "2022-10-27T17:38:57.438007+00:00",
+ "lastModifiedAt": "2023-XX-XXT13:30:24.098335+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
}, { "annotation": null,
- "hostName": "NFName-CR1-MgmtSwitch",
- "id": "subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR1-MgmtSwitch",
+ "hostName": "AggrRack-CE02",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-CE2",
"location": "eastus",
- "name": "Network device",
- "networkRackId": null,
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "CE2",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
- "serialNumber": "Arista;DCS-7010TX-48;12.02;JPE12345678",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
"systemData": {
- "createdAt": "2022-10-27T17:23:53.581927+00:00",
+ "createdAt": "2023-XX-XXT12:52:43.489256+00:00",
"createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "createdByType": "Application",
- "lastModifiedAt": "2022-10-27T17:38:59.922499+00:00",
+ "lastModifiedAt": "2023-XX-XXT13:30:40.923567+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ },
+ {
+ "annotation": null,
+ "hostName": "AggRack-TOR17",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-TOR17",
+ "location": "eastus2euap",
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "TOR17",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
+ "systemData": {
+ "createdAt": "2023-XX-XXT12:52:44.676759+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2023-XX-XXT13:31:59.650758+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ },
+ {
+ "annotation": null,
+ "hostName": "AggRack-TOR18",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-TOR18",
+ "location": "eastus",
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "TOR18",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
+ "systemData": {
+ "createdAt": "2023-03-16T12:52:45.801778+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2023-XX-XXT13:32:13.369591+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ },
+ {
+ "annotation": null,
+ "hostName": "AggRack-MGMT1",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-MgmtSwitch1",
+ "location": "eastus",
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "MgmtSwitch1",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
+ "systemData": {
+ "createdAt": "2023-XX-XXT12:52:46.911202+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2023-XX-XXT13:31:00.836730+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ },
+ {
+ "annotation": null,
+ "hostName": "AggRack-MGMT2",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-MgmtSwitch2",
+ "location": "eastus",
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "MgmtSwitch2",
+ "networkDeviceSku": "DefaultSku",
+ "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "ArXXX;DCS-7XXXXXX-24;12.05;JPXXXXXXXX",
+ "systemData": {
+ "createdAt": "2023-XX-XXT12:52:48.020528+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2023-XX-XXT13:31:42.173645+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
"version": null } ```-
-Run the following command to show details of a Device:
+Run the following command to get or show details of a Network Fabric device:
```azurecli
-az nf device show --resource-group "example-rg" --resource-name "example-device"
+az nf device show --resource-group "NFResourceGroup" --resource-name "Network-Device-Name"
``` Expected output:
-```json
+```output
{ "annotation": null,
- "hostName": "NFName-CR1-TOR1",
- "id": "subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkfabric/networkDevices/networkDevice1",
+ "hostName": "AggrRack-CE01",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AggrRack-CE1",
"location": "eastus",
- "name": "networkDevice1",
- "networkRackId": null,
+ "name": "Network-Device-Name",
+ "networkDeviceRole": "CE1",
+ "networkDeviceSku": "DefaultSku",
+    "networkRackId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/Network-Device-Name",
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
- "serialNumber": "Arista;DCS-7280DR3-24;12.05;JPE12345678",
+ "resourceGroup": "NFResourceGroup",
+ "serialNumber": "AXXXX;DCS-XXXXX-24;XX.XX;JXXXXXXX",
"systemData": {
- "createdAt": "2022-10-27T17:23:49.203745+00:00",
+ "createdAt": "2023-XX-XXT12:52:42.270551+00:00",
"createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "createdByType": "Application",
- "lastModifiedAt": "2022-10-27T17:38:57.438007+00:00",
+ "lastModifiedAt": "2023-XX-XXT13:30:24.098335+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
} ```
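+
+Before provisioning, it can be handy to confirm every device's role and serial number at a glance rather than reading the full JSON for each one. Here's a minimal sketch that uses only the standard `--query` and `--output` options and the properties shown in the outputs above:
+
+```azurecli
+# Tabular summary of all Network Fabric devices in the resource group
+az nf device list --resource-group "NFResourceGroup" \
+  --query "[].{name:name, role:networkDeviceRole, serial:serialNumber, state:provisioningState}" \
+  --output table
+```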
-## Provision Fabric
-Once the Device serial number is updated, the Network Fabric needs to be provisioned by executing the following command
+## Provision fabric
+
+After updating the device serial number, provision the fabric by running the following command:
```azurecli
-az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
``` ```azurecli
-az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
``` Expected output:
-```json
+```output
{ "annotation": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
+ "fabricAsn": 65048,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "ipv4Prefix": "10.2.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da02::/59",
"l2IsolationDomains": null, "l3IsolationDomains": null, "location": "eastus", "managementNetworkConfiguration": {
- "ipv4Prefix": "10.x.0.0/19",
- "ipv6Prefix": null,
- "managementVpnConfiguration": {
+ "infrastructureVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
Expected output:
"65048:10039" ] },
- "peeringOption": "OptionA",
- "state": "Enabled"
+ "peeringOption": "OptionB"
}, "workloadVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
Expected output:
"65048:10050" ] },
- "peeringOption": "OptionA",
- "state": "Enabled"
+ "peeringOption": "OptionB"
} }, "name": "NFName",
- "networkfabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
- "networkfabricSku": "NFSKU",
- "networkToNetworkInterconnect": {
- "layer2Configuration": null,
- "layer3Configuration": {
- "fabricAsn": 65048,
- "peerAsn": 65048,
- "primaryIpv4Prefix": "10.x.0.124/30",
- "primaryIpv6Prefix": null,
- "routerId": null,
- "secondaryIpv4Prefix": "10.x.0.128/30",
- "secondaryIpv6Prefix": null,
- "vlanId": 20
- }
- },
- "operationalState": "Provisioned",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "operationalState": "Provisioning",
"provisioningState": "Succeeded",
+ "rackCount": 3,
"racks": [
- "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/AggRack"
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack2"
], "resourceGroup": "NFResourceGroup",
+ "routerId": null,
+ "serverCountPerRack": 7,
"systemData": {
- "createdAt": "2022-11-02T06:56:05.019873+00:00",
- "createdBy": "email@adddress.com",
+ "createdAt": "2023-XX-XXT12:52:11.769525+00:00",
+ "createdBy": "email@address.com",
"createdByType": "User",
- "lastModifiedAt": "2022-11-02T09:12:58.889552+00:00",
+ "lastModifiedAt": "2023-XX-XXT14:47:59.424826+00:00",
"lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7", "lastModifiedByType": "Application" },
Expected output:
"terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.x.10.0/30",
+ "primaryIpv4Prefix": "20.0.1.0/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.x.10.4/30",
+ "secondaryIpv4Prefix": "20.0.0.0/30",
"secondaryIpv6Prefix": null,
- "****": "****"
+ "serialNumber": "XXXXXXXXXXXX",
+ "username": "XXXX"
}, "type": "microsoft.managednetworkfabric/networkfabrics" } ```
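+
+Provisioning takes time. The sample output above shows `operationalState` as `Provisioning`; once provisioning completes it should read `Provisioned`, which is also what the deprovision and delete steps below assume. The following sketch checks just that field instead of re-reading the whole JSON (only standard CLI options are used):
+
+```azurecli
+# Show only the fabric's operational state
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName" \
+  --query "operationalState" --output tsv
+```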
-## Deleting Network Fabric
-
-To delete the Network Fabric, the operational state of shouldn't be `Provisioned`. To change the operational state from `Provisioned`, run the `deprovision` command.
+## Deprovision a fabric
+
+To deprovision a fabric, ensure that the fabric's operational state is `Provisioned`.
```azurecli
-az nf fabric deprovision --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric deprovision --resource-group "NFResourceGroup" --resource-name "NFName"
+ ``` Expected output:
-```json
+```output
{
- "annotation": null,
- "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "l2IsolationDomains": null,
- "l3IsolationDomains": null,
- "location": "eastus",
- "managementNetworkConfiguration": {
- "ipv4Prefix": "10.x.0.0/19",
- "ipv6Prefix": null,
- "managementVpnConfiguration": {
- "optionAProperties": null,
- "optionBProperties": {
- "exportRouteTargets": [
- "65048:10039"
- ],
- "importRouteTargets": [
- "65048:10039"
- ]
- },
- "peeringOption": "OptionA",
- "state": "Enabled"
- },
- "workloadVpnConfiguration": {
- "optionAProperties": null,
- "optionBProperties": {
- "exportRouteTargets": [
- "65048:10050"
- ],
- "importRouteTargets": [
- "65048:10050"
- ]
- },
- "peeringOption": "OptionA",
- "state": "Enabled"
- }
- },
- "name": "NFName",
- "networkfabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
- "networkfabricSku": "NFSKU",
- "networkToNetworkInterconnect": {
- "layer2Configuration": null,
- "layer3Configuration": {
- "fabricAsn": 65048,
- "peerAsn": 65048,
- "primaryIpv4Prefix": "10.x.0.124/30",
- "primaryIpv6Prefix": null,
- "routerId": null,
- "secondaryIpv4Prefix": "10.x.0.128/30",
- "secondaryIpv6Prefix": null,
- "vlanId": 20
- }
- },
- "operationalState": null,
- "provisioningState": "deprovisioned",
- "racks":["/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/AggRack".
- "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/CompRack1,
- "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/CompRack2]
- "resourceGroup": "NFResourceGroup",
- "systemData": {
- "createdAt": "2022-11-02T06:56:05.019873+00:00",
- "createdBy": "email@adddress.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
- "lastModifiedBy": "email@adddress.com",
- "lastModifiedByType": "User"
- },
- "tags": null,
- "terminalServerConfiguration": {
- "networkDeviceId": null,
- "password": null,
- "primaryIpv4Prefix": "20.x.10.0/30",
- "primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.x.10.4/30",
- "secondaryIpv6Prefix": null,
- "****": "root"
- },
- "type": "microsoft.managednetworkfabric/networkfabrics"
+  "annotation": null,
+  "fabricAsn": 65046,
+  "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+  "ipv4Prefix": "10.18.0.0/19",
+  "ipv6Prefix": null,
+  "l2IsolationDomains": [],
+  "l3IsolationDomains": null,
+  "location": "eastus",
+  "managementNetworkConfiguration": {
+    "infrastructureVpnConfiguration": {
+      "administrativeState": "Enabled",
+      "networkToNetworkInterconnectId": null,
+      "optionAProperties": null,
+      "optionBProperties": {
+        "exportRouteTargets": [
+          "65048:10039"
+        ],
+        "importRouteTargets": [
+          "65048:10039"
+        ]
+      },
+      "peeringOption": "OptionB"
+    },
+    "workloadVpnConfiguration": {
+      "administrativeState": "Enabled",
+      "networkToNetworkInterconnectId": null,
+      "optionAProperties": null,
+      "optionBProperties": {
+        "exportRouteTargets": [
+          "65048:10050"
+        ],
+        "importRouteTargets": [
+          "65048:10050"
+        ]
+      },
+      "peeringOption": "OptionB"
+    }
+  },
+  "name": "NFName",
+  "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+  "networkFabricSku": "M4-A400-A100-C16-aa",
+  "operationalState": "Deprovisioned",
+  "provisioningState": "Succeeded",
+  "rackCount": 3,
+  "racks": [
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack",
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack2"
+  ],
+  "resourceGroup": "NFResourceGroup",
+  "routerId": null,
+  "serverCountPerRack": 8,
+  "systemData": {
+    "createdAt": "2023-XX-XXT19:30:23.319643+00:00",
+    "createdBy": "email@address.com",
+    "createdByType": "User",
+    "lastModifiedAt": "2023-XX-XXT06:47:36.130713+00:00",
+    "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+    "lastModifiedByType": "Application"
+  },
+  "tags": null,
+  "terminalServerConfiguration": {
+    "networkDeviceId": null,
+    "password": null,
+    "primaryIpv4Prefix": "20.0.1.12/30",
+    "primaryIpv6Prefix": null,
+    "secondaryIpv4Prefix": "20.0.0.12/30",
+    "secondaryIpv6Prefix": null,
+    "serialNumber": "XXXXXXXXXXXXX",
+    "username": "XXXX"
+  },
+  "type": "microsoft.managednetworkfabric/networkfabrics"
}+ ```
-After the operationalState is no longer `Provisioned`, delete the Network Fabric
+## Delete a fabric
+
+To delete the fabric, its operational state shouldn't be `Provisioned`. To change the operational state from `Provisioned` to `Deprovisioned`, run the deprovision command. Ensure there are no racks associated with the fabric before deleting it.
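+
+A quick pre-delete check that combines the two conditions above into one query (a sketch; only the standard `--query` option and properties shown in the earlier outputs are used):
+
+```azurecli
+# Confirm the fabric is no longer Provisioned and inspect its associated racks
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName" \
+  --query "{operationalState: operationalState, racks: racks}"
+```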
+ ```azurecli
-az nf fabric delete --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric delete --resource-group "NFResourceGroup" --resource-name "NFName"
+
+```
+
+Expected output:
+
+```output
+{
+  "annotation": null,
+  "fabricAsn": 65044,
+  "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+  "ipv4Prefix": "10.21.0.0/16",
+  "ipv6Prefix": "10:15:0:0::/59",
+  "l2IsolationDomains": null,
+  "l3IsolationDomains": null,
+  "location": "eastus",
+  "managementNetworkConfiguration": {
+    "infrastructureVpnConfiguration": {
+      "administrativeState": "Enabled",
+      "networkToNetworkInterconnectId": null,
+      "optionAProperties": null,
+      "optionBProperties": {
+        "exportRouteTargets": [
+          "65044:10039"
+        ],
+        "importRouteTargets": [
+          "65044:10039"
+        ]
+      },
+      "peeringOption": "OptionB"
+    },
+    "workloadVpnConfiguration": {
+      "administrativeState": "Enabled",
+      "networkToNetworkInterconnectId": null,
+      "optionAProperties": null,
+      "optionBProperties": {
+        "exportRouteTargets": [
+          "65044:10050"
+        ],
+        "importRouteTargets": [
+          "65044:10050"
+        ]
+      },
+      "peeringOption": "OptionB"
+    }
+  },
+  "name": "NFName",
+  "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+  "networkFabricSku": "SKU-Name",
+  "operationalState": "Deprovisioned",
+  "provisioningState": "Deleting",
+  "rackCount": 3,
+  "racks": [
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack",
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
+    "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack2"
+  ],
+  "resourceGroup": "NFResourceGroup",
+  "routerId": null,
+  "serverCountPerRack": 7,
+  "systemData": {
+    "createdAt": "2023-XX-XXT10:31:22.423399+00:00",
+    "createdBy": "email@address.com",
+    "createdByType": "User",
+    "lastModifiedAt": "2023-XX-XXT06:31:41.675991+00:00",
+    "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+    "lastModifiedByType": "Application"
+  },
+  "tags": null,
+  "terminalServerConfiguration": {
+    "networkDeviceId": null,
+    "password": null,
+    "primaryIpv4Prefix": "20.0.1.68/30",
+    "primaryIpv6Prefix": null,
+    "secondaryIpv4Prefix": "20.0.0.68/30",
+    "secondaryIpv6Prefix": null,
+    "serialNumber": "XXXXXXXXXXXXX",
+    "username": "XXXX"
+  },
+  "type": "microsoft.managednetworkfabric/networkfabrics"
+}
+```
+After the Network Fabric is successfully deleted, running a `show` on the same fabric returns an error because the resource is no longer found.
+
+```azurecli
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+```
+
+Expected output:
+```output
+Command group 'nf' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+(ResourceNotFound) The Resource 'Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName' under resource group 'NFResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
+Code: ResourceNotFound
```
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
When you use the integrated Dynatrace experience in Azure portal, the following
## Prerequisites
-Before you link the subscription to a Dynatrace environment,[complete the pre-deployment configuration.](dynatrace-link-to-existing.md).
+Before you link the subscription to a Dynatrace environment, [complete the pre-deployment configuration](dynatrace-how-to-configure-prereqs.md).
### Find Offer
Use the Azure portal to find Azure Native Dynatrace Service application.
## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
role-based-access-control Classic Administrators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md
This article describes how to add or change the Co-Administrator and Service Adm
## Add a Co-Administrator > [!TIP]
-> You only need to add a Co-Administrator if the user needs to manage Azure classic deployments by using [Azure Service Management PowerShell Module](/powershell/module/servicemanagement/azure.service). If the user only uses the Azure portal to manage the classic resources, you wonΓÇÖt need to add the classic administrator for the user.
+> You only need to add a Co-Administrator if the user needs to manage Azure classic deployments by using [Azure Service Management PowerShell Module](/powershell/azure/servicemanagement/install-azure-ps). If the user only uses the Azure portal to manage the classic resources, you wonΓÇÖt need to add the classic administrator for the user.
1. Sign in to the [Azure portal](https://portal.azure.com) as the Service Administrator or a Co-Administrator.
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
description: Learn how to download the necessary SAP media for installing the SA
Previously updated : 02/03/2023 Last updated : 04/06/2023 #Customer intent: As a developer, I want to download the necessary SAP media for installing the SAP software and upload it for us with Azure Center for SAP solutions.
The following operating system (OS) software versions are compatible with these
| SUSE | SLES 15sp3-gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 | | SUSE | SLES 12sp4-gen2 latest | S/4HANA 1909 SPS 03 | -- You can use `latest` if you want to use the latest image and not a specific older version. If the *latest* image version is newly released in marketplace and has an unforseen issue, the deployment may fail. If you are using Portal for deployment, we recommend choosing a different image *sku train* (e.g. 12-SP4 instead of 15-SP3) till the issues are resolved. However, if deploying via API/CLI, you can provide any other *image version* which is available. To view and select the available image versions from a publisher, use below commands
+- You can use `latest` if you want to use the latest image and not a specific older version. If the *latest* image version is newly released in the marketplace and has an unforeseen issue, the deployment may fail. If you're using the Azure portal for deployment, we recommend choosing a different image *sku train* (for example, 12-SP4 instead of 15-SP3) until the issues are resolved. However, if you're deploying via the API or CLI, you can provide any other *image version* that's available. To view and select the available image versions from a publisher, use the following commands:
```Powershell
Before downloading the SAP software, set up an Azure Storage account to store th
Next, set up a virtual machine (VM) where you will download the SAP components later.
-1. Create a **Ubuntu 20.04** VM in Azure. For more information, see [how to create a Linux VM in the Azure portal](../../virtual-machines/linux/quick-create-portal.md).
+1. Create an **Ubuntu 20.04** VM in Azure. For more information, see [how to create a Linux VM in the Azure portal](../../virtual-machines/linux/quick-create-portal.md).
1. Sign in to the VM.
Next, download the SAP installation media to the VM using a script.
1. Where `BOM_directory_path` is the absolute path to **SAP-automation-samples/SAP**. e.g. */home/loggedinusername/SAP-automation-samples/SAP*
-1. Where `orchestration_ansible_user` is the user with **admin** privileges. e.g. root.
+1. Where `orchestration_ansible_user` is the user with **admin** privileges (for example, root).
Now you can [install the SAP software](install-software.md) through Azure Center for SAP solutions.
First, set up an Azure Storage account for the SAP components:
1. Grant the roles **Storage Blob Data Reader** and **Reader and Data Access** to the user-assigned managed identity, which you used during infrastructure deployment.
-1. Create a container within the storage account. You can choose any container name, such as **sapbits**.
+1. Create a container within the storage account. You can choose any container name, such as `sapbits`.
-1. Create a folder within the container, named **sapfiles**.
+1. Create a folder within the container, named `sapfiles`.
-1. Go to the **sapfiles** folder.
+1. Go to the `sapfiles` folder.
-1. Create two subfolders named **archives** and **boms**.
+1. Create two subfolders named `archives` and `boms`.
-1. In the **boms** folder, create four subfolders with the following names, depending on the SAP version that you're using..
+1. In the `boms` folder, create four subfolders with the following names, depending on the SAP version that you're using:
1. For S/4HANA 1909 SPS 03:
Next, upload the SAP software files to the storage account:
1. For S/4HANA 1909 SPS 03:
- 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
+ 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
1. [HANA_2_00_059_v0004ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml)
Next, upload the SAP software files to the storage account:
1. For S/4HANA 2020 SPS 03:
- 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
+ 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
- 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
Next, upload the SAP software files to the storage account:
1. For S/4HANA 2021 ISS 00:
- 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
+ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
- 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP13_latest.yaml](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
Next, upload the SAP software files to the storage account:
1. For S/4HANA 1909 SPS 03:
- 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_059_v0005ms/HANA_2_00_059_v0005ms.yaml)
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/HANA_2_00_055_v1_install.rsp.j2)
- 1. [S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
1. For S/4HANA 2020 SPS 03:
- 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_059_v0005ms/HANA_2_00_059_v0005ms.yaml)
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2)
- 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2)
+ 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2)
- 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-dbload-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-dbload-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-ers-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-ers-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-generic-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-generic-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-pas-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-pas-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2)
- 1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2)
1. For S/4HANA 2021 ISS 00:
- 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_059_v0005ms/HANA_2_00_059_v0005ms.yaml)
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2)
- 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2)
+ 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2)
- 1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.AB](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params)
- 1. [NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
- 1. [NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
- 1. [NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params)
+ 1. [NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params)
- 1. [NW_Users_Create-GENERIC.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_Users_Create-GENERIC.HDB.PD_Distributed.params)
+ 1. [NW_Users_Create-GENERIC.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/NW_Users_Create-GENERIC.HDB.PD_Distributed.params)
- 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2)
- 1. [S4HANA_2021_ISS_v0001ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-web-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/SAP-automation-samples/main/SAP/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-web-inifile-param.j2)
-1. Upload all the files that you downloaded to the **templates** folder.
+1. Upload all the files that you downloaded to the `templates` folder.
-1. Go back to the **sapfiles** folder, then go to the **archives** subfolder.
+1. Go back to the `sapfiles` folder, then go to the `archives` subfolder.
1. Download all packages that aren't labeled as `download: false` from the main BOM URL. Choose the packages based on your SAP version. You can use the URL mentioned in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. 1. For S/4HANA 1909 SPS 03:
- 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
+ 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
1. [HANA_2_00_059_v0004ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml)
Next, upload the SAP software files to the storage account:
1. For S/4HANA 2020 SPS 03:
- 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
+ 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
- 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
Next, upload the SAP software files to the storage account:
1. For S/4HANA 2021 ISS 00:
- 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
+ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
- 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
+ 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml)
1. [SWPM20SP13_latest.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/SWPM20SP13_latest/SWPM20SP13_latest.yaml)
Next, upload the SAP software files to the storage account:
1. Repeat the previous step for the main and dependent BOM files.
-1. Upload all the packages that you downloaded to the **archives** folder. Don't rename the files.
+1. Upload all the packages that you downloaded to the `archives` folder. Don't rename the files.
1. Optionally, install other packages that aren't required. 1. Download the package files.
- 1. Upload the files to the **archives** folder.
+ 1. Upload the files to the `archives` folder.
1. Open the `S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms` YAML file for the BOM. 1. Edit the information for each optional package to `download:true`.
- 1. Save and reupload the YAML file. Make sure you only have one YAML file in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the **boms** folder.
+ 1. Save and reupload the YAML file. Make sure you only have one YAML file in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the `boms` folder.
Now you can [install the SAP software](install-software.md) through Azure Center for SAP solutions.
sentinel Github Using Webhooks Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/github-using-webhooks-using-azure-function.md
To integrate with GitHub (using Webhooks) (using Azure Function) make sure you h
> [!NOTE]
- > This connector has been built on http trigger based Azure Function. And it provides an endpoint to which github will be connected through it's webhook capability and posts the subscribed events into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+ > This connector is built on an HTTP trigger-based Azure Function. It provides an endpoint to which GitHub connects through its webhook capability and posts the subscribed events into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](../../app-service/app-service-key-vault-references.md) to use Azure Key Vault with an Azure Function App.
If you're already signed in, go to the next step.
5. Subscribe for events and Click on "Add Webhook"
-*Now we are done with the github Webhook configuration. Once the github events triggered and after the delay of 20 to 30 mins (As there will be a dealy for LogAnalytics to spin up the resources for the first time), you should be able to see all the transactional events from the GitHub into LogAnalytics workspace table called "githubscanaudit_CL".*
+*Now we're done with the GitHub webhook configuration. Once the GitHub events are triggered, and after a delay of 20 to 30 minutes (because Log Analytics takes time to spin up the resources for the first time), you should be able to see all the transactional events from GitHub in the Log Analytics workspace table called "githubscanaudit_CL".*
For more details, Click [here](https://aka.ms/sentinel-gitHubwebhooksteps)
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-metrics.md
You can use two kinds of filters (properties):
* App: filter by app name * Instance: filter by app instance
+* Deployment: filter by deployment name
:::image type="content" source="media/concept-metrics/add-filter.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Metrics page with a chart selected and the Add filter controls highlighted." lightbox="media/concept-metrics/add-filter.png":::
You can also use the **Apply splitting** option, which will draw multiple lines
## User metrics options
+> [!NOTE]
+> For Spring Boot applications, you need to [add the spring-boot-starter-actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) to see metrics from Spring Boot Actuator.
+ The following tables show the available metrics and details. ### Error
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
When you're debugging application crashes, start by checking the running status
* Gradual memory leaks. For more information, see [Metrics](./concept-metrics.md).
+ > [!NOTE]
+ > These metrics are available only for Spring Boot applications, and you need to [add the spring-boot-starter-actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) to enable these metrics.
* If the application fails to start, verify that the application has valid jvm parameters. If jvm memory is set too high, the following error message might appear in your logs:
storage Create Data Lake Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/create-data-lake-storage-account.md
Title: Create a storage account for Azure Data Lake Storage Gen2 description: Learn how to create a storage account for use with Azure Data Lake Storage Gen2.-+ -+ Last updated 03/09/2023
storage Data Lake Storage Access Control Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control-model.md
Title: Access control model for Azure Data Lake Storage Gen2 description: Learn how to configure container, directory, and file-level access in accounts that have a hierarchical namespace.-+ Last updated 03/09/2023-+
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Title: Access control lists in Azure Data Lake Storage Gen2 description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage Gen2.-+ Last updated 03/09/2023-+ ms.devlang: python
storage Data Lake Storage Acl Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-azure-portal.md
Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2 description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled.-+ Last updated 03/09/2023-+ # Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-cli.md
Title: Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2
description: Use the Azure CLI to manage access control lists (ACL) in storage accounts that have a hierarchical namespace. -+ Last updated 02/17/2021-+ ms.devlang: azurecli
storage Data Lake Storage Directory File Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-cli.md
Title: Use Azure CLI to manage data (Azure Data Lake Storage Gen2)
description: Use the Azure CLI to manage directories and files in storage accounts that have a hierarchical namespace. -+ Last updated 02/17/2021-+ ms.devlang: azurecli
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
Title: 'Storage Explorer: Set ACLs in Azure Data Lake Storage Gen2' description: Use the Azure Storage Explorer to manage access control lists (ACLs) in storage accounts that have hierarchical namespace (HNS) enabled.-+ Last updated 03/09/2023-+ # Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage Gen2
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The following clients are known to be incompatible with SFTP for Azure Blob Stor
- paramiko 1.16.0 - SSH.NET 2016.1.0
-The unsupported client list above is not exhaustive and may change over time.
+The unsupported client list above isn't exhaustive and may change over time.
## Client settings
To transfer files to or from Azure Blob Storage via SFTP clients, see the follow
| Category | Unsupported operations | |||
-| ACLs | <li>`chgrp` - change group<li>`chmod` - change permissions/mode<li>`chown` - change owner<li>`put/get -p` - preserving permissions |
+| ACLs | <li>`chgrp` - change group<li>`chmod` - change permissions/mode<li>`chown` - change owner<li>`put/get -p` - preserving properties such as permissions and timestamps |
| Resuming Uploads | `reput`. `put -a` | | Random writes and appends | <li>Operations that include both READ and WRITE flags. For example: [SSH.NET create API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20SftpFileStream-,Create,-(string%20path))<li>Operations that include APPEND flag. For example: [SSH.NET append API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20void-,AppendAllLines,-(string%20path%2C%20IEnumerable%3Cstring%3E%20contents)). | | Links |<li>`symlink` - creating symbolic links<li>`ln` - creating hard links<li>Reading links not supported |
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- To access the storage account using SFTP, your network must allow traffic on port 22. -- Static IP addresses aren't supported for storage accounts. This is not an SFTP specific limitation.
+- Static IP addresses aren't supported for storage accounts. This isn't an SFTP-specific limitation.
-- Internet routing is not supported. Use Microsoft network routing.
+- Internet routing isn't supported. Use Microsoft network routing.
-- There's a 2 minute time out for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+- There's a 2-minute timeout for idle or inactive connections. OpenSSH appears to stop responding and then disconnects. Some clients reconnect automatically.
## Other
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
## Troubleshooting -- To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following pre-requisites are met at the storage account level:
+- To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following prerequisites are met at the storage account level:
- The account needs to be a general-purpose v2 or premium block blob account.
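As a quick check, you can use the Azure CLI to confirm that the account meets these requirements and then enable SFTP. The following is a minimal sketch; the resource group and storage account names are placeholders.

```azurecli
# Placeholder names; replace with your own values.
resourceGroupName="yourResourceGroupName"
storageAccountName="yourStorageAccountName"

# SFTP requires a general-purpose v2 or premium block blob account
# with a hierarchical namespace enabled.
az storage account show \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --query "{kind:kind, sku:sku.name, hnsEnabled:isHnsEnabled}" \
    --output table

# Once the prerequisites are met, enabling SFTP should succeed.
az storage account update \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --enable-sftp true
```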
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Add this code to the end of the `Main` method:
String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING"); // Create a BlobServiceClient object using a connection string
-BlobServiceClient client = new BlobServiceClientBuilder()
+BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
.connectionString(connectStr) .buildClient();
To see Blob storage sample apps, continue to:
> [Azure Blob Storage library for Java samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob/src/samples/java/com/azure/storage/blob) - To learn more, see the [Azure Blob Storage client libraries for Java](/java/api/overview/azure/storage-blob-readme).-- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Java developers](/azure/developer/java/sdk/overview).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Java developers](/azure/developer/java/sdk/overview).
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
The following tutorial explains how to migrate an existing application to connec
## Sign-in and migrate the app code to use passwordless connections
-For local development, make sure you're authenticated with the same Azure AD account you assigned the role to on your Blob Storage account. You can authenticate via the Azure CLI, Visual Studio, Azure PowerShell, or other tools such as IntelliJ.
- [!INCLUDE [default-azure-credential-sign-in](../../../includes/passwordless/default-azure-credential-sign-in.md)] Next, update your code to use passwordless connections.
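As one way to complete the local sign-in and role assignment, the following Azure CLI sketch signs in and grants a data-plane role such as **Storage Blob Data Contributor** on the storage account so that `DefaultAzureCredential` can pick up your Azure AD identity. The account, resource group, and user principal values are placeholders.

```azurecli
# Placeholder values; replace with your own.
resourceGroupName="yourResourceGroupName"
storageAccountName="yourStorageAccountName"
userPrincipal="user@contoso.com"

# Sign in locally with the Azure AD account your app will use during development.
az login

# Look up the storage account resource ID to scope the role assignment.
storageAccountId=$(az storage account show \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --query id --output tsv)

# Assign a blob data role to the identity used during local development.
az role assignment create \
    --assignee $userPrincipal \
    --role "Storage Blob Data Contributor" \
    --scope $storageAccountId
```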
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Storage
+ Title: Azure Resource Graph sample queries
+ description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties.++ Last updated 07/07/2022 --+
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The following release notes are for version 16.0.0.0 of the Azure File Sync agen
- Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully leverage this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or geo-zone-redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Storage redundancy](../common/storage-redundancy.md). > [!Note] > Azure File Sync is zone-redundant in all regions that [support zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support) except US Gov Virginia.--- Sync upload performance improvements
- - Sync upload performance has been improved. This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded.
- Immediately run server change enumeration to detect file changes that were missed on the server - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If file changes are missed due to journal wrap or other issues, the files will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
The following release notes are for version 16.0.0.0 of the Azure File Sync agen
> [!Note] > By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the -DeepScan parameter.
+- Bug fix for the PowerShell script FileSyncErrorsReport.ps1
+ - Miscellaneous reliability and telemetry improvements for cloud tiering and sync ### Evaluation Tool
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2. Previously updated : 03/16/2023 Last updated : 04/10/2023
Azure Premium SSD v2 is designed for IO-intense enterprise workloads that require sub-millisecond disk latencies and high IOPS and throughput at a low cost. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
+Premium SSD v2 supports a 4k physical sector size by default, but can be configured to use a 512E sector size as well. While most applications are compatible with 4k sector sizes, some require 512-byte sector sizes. Oracle Database, for example, requires release 12.2 or later to support 4k native disks. Older versions of Oracle DB require a 512-byte sector size.
+ ## Limitations [!INCLUDE [disks-prem-v2-limitations](../../includes/disks-prem-v2-limitations.md)]
Now that you know the region and zone to deploy to, follow the deployment steps
# [Azure CLI](#tab/azure-cli)
-Create a Premium SSD v2 disk in an availability zone. Then create a VM in the same region and availability zone that supports Premium Storage and attach the disk to it. Replace the values of all the variables with your own, then run the following script:
+Create a Premium SSD v2 disk in an availability zone. Then create a VM in the same region and availability zone that supports Premium Storage and attach the disk to it. The following script creates a Premium SSD v2 with a 4k sector size. To deploy one with a 512 sector size, update the `logicalSectorSize` variable. Replace the values of all the variables with your own, then run the following script:
```azurecli-interactive ## Initialize variables
diskName="yourDiskName"
resourceGroupName="yourResourceGroupName" region="yourRegionName" zone="yourZoneNumber"
+## Replace 4096 with 512 to deploy a disk with a 512 sector size
logicalSectorSize=4096 vmName="yourVMName" vmImage="Win2016Datacenter"
az vm create -n $vmName -g $resourceGroupName \
# [PowerShell](#tab/azure-powershell)
-Create a Premium SSD v2 disk in an availability zone. Then create a VM in the same region and availability zone that supports Premium Storage and attach the disk to it. Replace the values of all the variables with your own, then run the following script:
+Create a Premium SSD v2 disk in an availability zone. Then create a VM in the same region and availability zone that supports Premium Storage and attach the disk to it. The following script creates a Premium SSD v2 with a 4k sector size. To deploy one with a 512 sector size, update the `$logicalSectorSize` variable. Replace the values of all the variables with your own, then run the following script:
```powershell # Initialize variables
$diskName = "yourDiskName"
$diskSizeInGiB = 100 $diskIOPS = 5000 $diskThroughputInMBPS = 150
+# To use a 512 sector size, replace 4096 with 512
$logicalSectorSize=4096 $lun = 1 $vmName = "yourVMName"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroupName
:::image type="content" source="media/disks-deploy-premium-v2/premv2-select.png" alt-text="Screenshot selecting Premium SSD v2 SKU." lightbox="media/disks-deploy-premium-v2/premv2-select.png":::
+1. Select whether you'd like to deploy a 4k or 512 logical sector size.
+
+ :::image type="content" source="media/disks-deploy-premium-v2/premv2-sector-size.png" alt-text="Screenshot of logical sector size deployment options." lightbox="media/disks-deploy-premium-v2/premv2-sector-size.png":::
+ 1. Proceed through the rest of the VM deployment, making any choices that you desire. You've now deployed a VM with a premium SSD v2.
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 03/16/2023 Last updated : 04/10/2023
If you would like to start using ultra disks, see the article on [using Azure ul
Azure Premium SSD v2 is designed for IO-intense enterprise workloads that require consistent sub-millisecond disk latencies and high IOPS and throughput at a low cost. The performance (capacity, throughput, and IOPS) of Premium SSD v2 disks can be independently configured at any time, making it easier for more scenarios to be cost efficient while meeting performance needs. For example, a transaction-intensive database workload may need a large amount of IOPS at a small size, or a gaming application may need a large amount of IOPS during peak hours. Premium SSD v2 is suited for a broad range of workloads such as SQL server, Oracle, MariaDB, SAP, Cassandra, Mongo DB, big data/analytics, and gaming, on virtual machines or stateful containers.
+Premium SSD v2 supports a 4k physical sector size by default, but can be configured to use a 512E sector size as well. While most applications are compatible with 4k sector sizes, some require 512-byte sector sizes. Oracle Database, for example, requires release 12.2 or later to support 4k native disks. Older versions of Oracle DB require a 512-byte sector size.
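For illustration, the following Azure CLI sketch creates a standalone Premium SSD v2 disk with a 512 logical sector size. The resource names and zone are placeholders, and the sketch assumes the `--logical-sector-size` parameter is available in your CLI version; omit it to keep the default 4k sector size.

```azurecli
# Placeholder values; replace with your own.
resourceGroupName="yourResourceGroupName"
diskName="yourDiskName"
region="yourRegionName"
zone="yourZoneNumber"

# Create a Premium SSD v2 disk with a 512 logical sector size.
az disk create \
    --resource-group $resourceGroupName \
    --name $diskName \
    --location $region \
    --zone $zone \
    --size-gb 100 \
    --sku PremiumV2_LRS \
    --logical-sector-size 512
```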
+ ### Differences between Premium SSD and Premium SSD v2 Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a Premium SSD v2 to any supported size you prefer, and make granular adjustments to the performance without downtime. Premium SSD v2 doesn't support host caching, but benefits significantly from lower latency, which addresses some of the same core problems host caching addresses. The ability to adjust IOPS, throughput, and size at any time also means you can avoid the maintenance overhead of having to stripe disks to meet your needs.
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Title: Azure Compute - Linux diagnostic extension 4.0
-description: How to configure the Azure Linux diagnostic extension (LAD) 4.0 to collect metrics and log events from Linux VMs running in Azure.
+description: Learn how to configure the Azure Linux diagnostic extension (LAD) 4.0 to collect metrics and log events from Linux VMs in Azure.
Previously updated : 02/05/2021 Last updated : 04/04/2023 ms.devlang: azurecli # Use the Linux diagnostic extension 4.0 to monitor metrics and logs
-This document describes the latest versions of the Linux diagnostic extension (LAD).
+This article describes the latest versions of the Linux diagnostic extension (LAD).
> [!IMPORTANT]
-> For information about version 3.x, see [Use the Linux diagnostic extension 3.0 to monitor metrics and logs](./diagnostics-linux-v3.md).
+> For information about version 3.x, see [Use the Linux diagnostic extension 3.0 to monitor metrics and logs](./diagnostics-linux-v3.md).
> For information about version 2.3 and earlier, see [Monitor the performance and diagnostic data of a Linux VM](/previous-versions/azure/virtual-machines/linux/classic/diagnostic-extension-v2).
-## Introduction
-
-the Linux diagnostic extension helps a user monitor the health of a Linux VM running on Microsoft Azure. It has the following collection and capabilities:
+The Linux diagnostic extension helps you monitor the health of a Linux VM on Microsoft Azure. It has the following capabilities:
| Data source | Customization options | Required destinations | Optional destinations |
-| -- | | -- | |
-| Metrics | [Counter, Aggregation, Sample Rate, Specifiers](#performancecounters) | Azure Table Storage | EventHub, Azure Blob Storage (JSON format), Azure Monitor<sup>1</sup> |
-| Syslog | [Facility, Severity Level](#syslogevents) | Azure Table Storage | EventHub, Azure Blob Storage (JSON Format)
-| Files | [Log Path, Destination Table](#filelogs) | Azure Table Storage | EventHub, Azure Blob Storage (JSON Format)
-
-<sup>1</sup> New in LAD 4.0
-
-This extension works with both Azure deployment models (Azure Resource Manager and classic).
-
-## Install the extension
+| -- | | | |
+| Metrics | [Counter, Aggregation, Sample Rate, Specifiers](#performancecounters) | Azure Table Storage | EventHub, Azure Blob Storage (JSON format), Azure Monitor (new in LAD 4.0) |
+| Syslog | [Facility, Severity Level](#syslogevents) | Azure Table Storage | EventHub, Azure Blob Storage (JSON Format) |
+| Files | [Log Path, Destination Table](#filelogs) | Azure Table Storage | EventHub, Azure Blob Storage (JSON Format) |
-You can enable this extension for your VM and virtual machine scale set by using the Azure PowerShell cmdlets, Azure CLI scripts, Azure Resource Manager templates (ARM templates), or the Azure portal. For more information, see [Extensions and features](features-linux.md).
-
->[!NOTE]
->Some components of the Linux Diagnostic VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Because of this architecture, conflicts can arise if both extensions are instantiated in the same ARM template.
->
->To avoid install-time conflicts, use the [`dependsOn` directive](../../azure-resource-manager/templates/resource-dependency.md#dependson) to install the extensions sequentially. The extensions can be installed in either order.
-
-Use the installation instructions and a [downloadable sample configuration](https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json) to configure LAD 4.0 to:
+This extension works with both Azure deployment models: Azure Resource Manager and classic.
-* Capture and store the same metrics that LAD versions 2.3 and 3.x provided.
-* Send metrics to the Azure Monitor sink along with the usual sink to Azure Storage. This functionality is new in LAD 4.0.
-* Capture a useful set of file system metrics, as in LAD 3.0.
-* Capture the default syslog collection enabled by LAD 2.3.
-* Enable the Azure portal experience for charting and alerting on VM metrics.
+## Prerequisites
-The downloadable configuration is just an example. Please modify it to suit your needs.
+- **Azure Linux agent version 2.2.0 or later**. Most Azure VM Linux gallery images include version 2.2.7 or later. Run `/usr/sbin/waagent -version` to confirm the version installed on the VM. If the VM runs an older version of the guest agent, [update the Linux agent](./update-linux-agent.md).
+- **Azure CLI**. [Set up the Azure CLI](/cli/azure/install-azure-cli) environment on your machine.
+- **The `wget` command**. If you don't already have it, install it using the corresponding package manager.
+- **An Azure subscription and general purpose storage account** to store the data. General purpose storage accounts support table storage, which is required. A blob storage account doesn't work.
+- **Python 2**.
### Supported Linux distributions
-The Linux diagnostic extension supports many distributions and versions. The following list of distributions and versions applies only to Azure-endorsed Linux vendor images. The extension generally doesn't support third-party BYOL and BYOS images, like appliances.
-
-A distribution that lists only major versions, like Debian 7, is also supported for all minor versions. If a specific minor version is specified, only that version is supported. If a plus sign (+) is appended, minor versions equal to or later than the specified version are supported.
-
-Supported distributions and versions:
+The following distributions and versions include only Azure-endorsed Linux vendor images. The extension generally doesn't support third-party BYOL and BYOS images, like appliances.
- Ubuntu 18.04, 16.04, 14.04 - CentOS 8, 7, 6.5+ - Oracle Linux 7, 6.4+ - OpenSUSE 13.1+-- SUSE Linux Enterprise Server 12
+- SUSE Linux Enterprise Server 12 SP5
- Debian 9, 8, 7-- Red Hat Enterprise Linux (RHEL) 7, 6.7+
+- Red Hat Enterprise Linux (RHEL) 7.9
- Alma Linux 8 - Rocky Linux 8
-### Prerequisites
-
-* **Azure Linux agent version 2.2.0 or later**. Most Azure VM Linux gallery images include version 2.2.7 or later. Run `/usr/sbin/waagent -version` to confirm the version installed on the VM. If the VM is running an older version of the guest agent, [update the Linux agent](./update-linux-agent.md).
-* **Azure CLI**. [Set up the Azure CLI](/cli/azure/install-azure-cli) environment on your machine.
-* **The `wget` command**. If you don't already have it, install it using the corresponding package manager.
-* **An Azure subscription and general purpose storage account** to store the data. General purpose storage accounts support table storage, which is required. A blob storage account won't work.
-* **Python 2**.
+A distribution that lists only major versions, like Debian 7, is also supported for all minor versions. If a specific minor version is specified, only that version is supported. If a plus sign (+) is appended, minor versions equal to or later than the specified version are supported.
### Python requirement
->[!NOTE]
->We are currently planning to converge all versions of the Linux Diagnostic Extensions (LAD) with the new Azure Monitoring Agent - which already supports Python 3. We expect to ship this early to mid 2022; after which the LAD will be scheduled for deprecation pending announcement and approval.
->
+The Linux diagnostic extension requires Python 2. If your virtual machine uses a distribution that doesn't include Python 2, install it.
-The Linux diagnostic extension requires Python 2. If your virtual machine uses a distribution that doesn't include Python 2 by default, install it.
+> [!NOTE]
+> We are currently planning to converge all versions of the Linux diagnostic extension (LAD) with the new Azure Monitor Agent, which already supports Python 3. LAD will then be scheduled for deprecation, pending announcement and approval.
+>
-The following sample commands install Python 2 on various distributions:
+To install Python 2, run one of the following sample commands:
- Red Hat, CentOS, Oracle: `yum install -y python2` - Ubuntu, Debian: `apt-get install -y python2` - SUSE: `zypper install -y python2`
-The `python2` executable file must be aliased to *python*. Here's one way to achieve this:
+The `python2` executable file must be aliased to `python`.
1. Run the following command to remove any existing aliases.
- ```bash
- sudo update-alternatives --remove-all python
- ```
+ ```bash
+ sudo update-alternatives --remove-all python
+ ```
-2. Run the following command to create the new alias.
+1. Run the following command to create the new alias.
- ```bash
- sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
- ```
+ ```bash
+ sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
+ ```
+
+## Install the extension
+
+You can enable this extension for your VM and Virtual Machine Scale Set by using the Azure PowerShell cmdlets, Azure CLI scripts, Azure Resource Manager templates (ARM templates), or the Azure portal. For more information, see [Virtual machine extensions and features for Linux](features-linux.md).
+
+> [!NOTE]
+>
+> Some components of the Linux Diagnostic VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Conflicts can arise if both extensions are instantiated in the same ARM template.
+>
+>To avoid install-time conflicts, use the [`dependsOn` directive](../../azure-resource-manager/templates/resource-dependency.md#dependson) to install the extensions sequentially. The extensions can be installed in either order.
+
+Use the installation instructions and a [downloadable sample configuration](https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json) to configure LAD 4.0 to:
+
+- Capture and store the same metrics that LAD versions 2.3 and 3.x provided.
+- Send metrics to the Azure Monitor sink along with the usual sink to Azure Storage. This functionality is new in LAD 4.0.
+- Capture a useful set of file system metrics, as in LAD 3.0.
+- Capture the default syslog collection enabled by LAD 2.3.
+- Enable the Azure portal experience for charting and alerting on VM metrics.
+
+The downloadable configuration is just an example. Modify it to suit your needs.
### Installation
-You can install and configure LAD 4.0 in the Azure CLI or in PowerShell.
+You can install and configure LAD 4.0 in the Azure CLI or in Azure PowerShell.
# [Azure CLI](#tab/azcli) If your protected settings are in the file *ProtectedSettings.json* and your public configuration information is in *PublicSettings.json*, run this command: ```azurecli
-az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json
+az vm extension set --publisher Microsoft.Azure.Diagnostics \
+ --name LinuxDiagnostic --version 4.0 --resource-group <resource_group_name> \
+ --vm-name <vm_name> --protected-settings ProtectedSettings.json \
+ --settings PublicSettings.json
```
-The command assumes you're using the Azure Resource Management mode of the Azure CLI. To configure LAD for classic deployment model VMs, switch to Service Management mode (`azure config mode asm`) and omit the resource group name in the command.
+The command assumes that you're using the Azure Resource Management mode of the Azure CLI. To configure LAD for classic deployment model VMs, switch to Service Management mode (`azure config mode asm`) and omit the resource group name in the command.
For more information, see the [cross-platform CLI documentation](/cli/azure/authenticate-azure-cli).
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/powershell)
If your protected settings are in the `$protectedSettings` variable and your public configuration information is in the `$publicSettings` variable, run this command: ```powershell
-Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0
+Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> `
+ -Location <vm_location> -ExtensionType LinuxDiagnostic `
+ -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic `
+ -SettingString $publicSettings -ProtectedSettingString $protectedSettings `
+ -TypeHandlerVersion 4.0
``` ### Enable auto update
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+To enable automatic update of the agent, we recommend that you enable the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature:
# [Azure CLI](#tab/azcli) ```azurecli
-az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json --enable-auto-upgrade true
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic \
+ --version 4.0 --resource-group <resource_group_name> --vm-name <vm_name> \
+ --protected-settings ProtectedSettings.json --settings PublicSettings.json \
+ --enable-auto-upgrade true
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/powershell)
```powershell
-Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0 -EnableAutomaticUpgrade $true
+Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> `
+ -Location <vm_location> -ExtensionType LinuxDiagnostic `
+ -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic `
+ -SettingString $publicSettings -ProtectedSettingString $protectedSettings `
+ -TypeHandlerVersion 4.0 -EnableAutomaticUpgrade $true
``` ### Sample installation
+In these examples, the sample configuration collects a set of standard data and sends it to table storage. The URL for the sample configuration and its contents can change.
+ > [!NOTE] > For the following samples, fill in the appropriate values for the variables in the first section before you run the code.
-In these examples, the sample configuration collects a set of standard data and sends it to table storage. The URL for the sample configuration and its contents can change.
+In most cases, you should download a copy of the portal settings JSON file and customize it for your needs. Then use templates or your own automation to deploy your customized version of the configuration file rather than downloading it from the URL each time.
-In most cases, you should download a copy of the portal settings JSON file and customize it for your needs. Then use templates or your own automation to use a customized version of the configuration file rather than downloading from the URL each time.
-
-> [!NOTE]
-> When you enable the new Azure Monitor sink, the VMs need to have system-assigned identity enabled to generate Managed Service Identity (MSI) authentication tokens. You can add these settings during or after VM creation.
->
-> For instructions for the Azure portal, the Azure CLI, PowerShell, and Azure Resource Manager, see [Configure managed identities](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
+When you enable the new Azure Monitor sink, the VMs need to have system-assigned identity enabled to generate Managed Service Identity (MSI) authentication tokens. You can add these settings during or after VM creation. For instructions for the Azure portal, the Azure CLI, PowerShell, and Azure Resource Manager, see [Configure managed identities](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
# [Azure CLI](#tab/azcli)
-#### Installation Sample - Azure CLI
+#### Installation sample - Azure CLI
```azurecli # Set your Azure VM diagnostic variables.
+my_subscription_id=<your_azure_subscription_id>
my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm> my_linux_vm=<your_azure_linux_vm_name> my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnos
az login # Select the subscription that contains the storage account.
-az account set --subscription <your_azure_subscription_id>
+az account set --subscription $my_subscription_id
# Enable system-assigned identity on the existing VM.
-az vm identity assign -g $my_resource_group -n $my_linux_vm
+az vm identity assign --resource-group $my_resource_group --name $my_linux_vm
-# Download the sample public settings. (You could also use curl or any web browser.)
+# Download the sample public settings. You could instead use curl or any web browser.
wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json # Build the VM resource ID. Replace the storage account name and resource ID in the public settings.
-my_vm_resource_id=$(az vm show -g $my_resource_group -n $my_linux_vm --query "id" -o tsv)
+my_vm_resource_id=$(az vm show --resource-group $my_resource_group \
+ --name $my_linux_vm --query "id" -o tsv)
sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json sed -i "s#__VM_RESOURCE_ID__#$my_vm_resource_id#g" portal_public_settings.json # Build the protected settings (storage account SAS token).
-my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
-my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
+my_diagnostic_storage_account_sastoken=$(az storage account generate-sas \
+ --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z \
+ --permissions wlacu --resource-types co --services bt -o tsv)
+my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', \
+ 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
# Finally, tell Azure to install and enable the extension.
-az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group $my_resource_group --vm-name $my_linux_vm --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic \
+ --version 4.0 --resource-group $my_resource_group --vm-name $my_linux_vm \
+ --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/powershell)
-#### Installation Sample - PowerShell
+#### Installation sample - PowerShell
```powershell $storageAccountName = "yourStorageAccountName"
$publicSettings = $publicSettings.Replace('__VM_RESOURCE_ID__', $vm.Id)
# If you have your own customized public settings, you can inline those rather than using the preceding template: $publicSettings = '{"ladCfg": { ... },}' # Generate a SAS token for the agent to use to authenticate with the storage account
-$sasToken = New-AzStorageAccountSASToken -Service Blob,Table -ResourceType Service,Container,Object -Permission "racwdlup" -Context (Get-AzStorageAccount -ResourceGroupName $storageAccountResourceGroup -AccountName $storageAccountName).Context -ExpiryTime $([System.DateTime]::Now.AddYears(10))
+$sasToken = New-AzStorageAccountSASToken -Service Blob,Table `
+ -ResourceType Service,Container,Object -Permission "racwdlup" `
+ -Context (Get-AzStorageAccount -ResourceGroupName $storageAccountResourceGroup `
+ -AccountName $storageAccountName).Context -ExpiryTime $([System.DateTime]::Now.AddYears(10))
# Build the protected settings (storage account SAS token)
-$protectedSettings="{'storageAccountName': '$storageAccountName', 'storageAccountSasToken': '$sasToken'}"
+$protectedSettings="{'storageAccountName': '$storageAccountName', `
+ 'storageAccountSasToken': '$sasToken'}"
# Finally, install the extension with the settings you built
-Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0
+Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName `
+ -Location $vm.Location -ExtensionType LinuxDiagnostic `
+ -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic `
+ -SettingString $publicSettings -ProtectedSettingString $protectedSettings `
+ -TypeHandlerVersion 4.0
```
-#### Installation Sample for virtual machine scale sets - Azure CLI
+#### Installation sample for Virtual Machine Scale Sets - Azure CLI
```azurecli # Set your Azure virtual machine scale set diagnostic variables.
+my_subscription_id=<your_azure_subscription_id>
my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm> my_linux_vmss=<your_azure_linux_vmss_name> my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnos
az login # Select the subscription that contains the storage account.
-az account set --subscription <your_azure_subscription_id>
+az account set --subscription $my_subscription_id
# Enable system-assigned identity on the existing virtual machine scale set.
-az vmss identity assign -g $my_resource_group -n $my_linux_vmss
+az vmss identity assign --resource-group $my_resource_group --name $my_linux_vmss
-# Download the sample public settings. (You could also use curl or any web browser.)
+# Download the sample public settings. You could also use curl or any web browser.
wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json # Build the virtual machine scale set resource ID. Replace the storage account name and resource ID in the public settings.
-my_vmss_resource_id=$(az vmss show -g $my_resource_group -n $my_linux_vmss --query "id" -o tsv)
+my_vmss_resource_id=$(az vmss show --resource-group $my_resource_group \
+ --name $my_linux_vmss --query "id" -o tsv)
sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json sed -i "s#__VM_RESOURCE_ID__#$my_vmss_resource_id#g" portal_public_settings.json # Build the protected settings (storage account SAS token).
-my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
+my_diagnostic_storage_account_sastoken=$(az storage account generate-sas \
+ --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z \
+ --permissions wlacu --resource-types co --services bt -o tsv)
my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}" # Finally, tell Azure to install and enable the extension.
-az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
+az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic \
+ --version 4.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss \
+ --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
``` ### Update the extension settings
-After you change your protected or public settings, deploy them to the VM by running the same command. If any settings changed, the updates are sent to the extension. LAD reloads the configuration and restarts itself.
+After you change your protected or public settings, run the same command to deploy them to the VM. If any settings changed, the updates are sent to the extension. LAD reloads the configuration and restarts itself.
### Migrate from previous versions of the extension
-The latest version of the extension is 4.0, *which is currently in public preview*. Older versions of 3.x are still supported. But 2.x versions have been deprecated since July 31, 2018.
+The latest version of the extension is 4.0, *which is currently in public preview*. Older versions of 3.x are still supported. 2.x versions have been deprecated since July 31, 2018.
> [!IMPORTANT] > To migrate from 3.x to the newest version of the extension, uninstall the old extension. Then install version 4, which includes the updated configuration for system-assigned identity and sinks for sending metrics to the Azure Monitor sink. When you install the new extension, enable automatic minor version upgrades:
-* On Azure Resource Manager deployment model VMs, include `"autoUpgradeMinorVersion": true` in the VM deployment template.
-* On classic deployment model VMs, specify version `4.*` if you're installing the extension through the Azure CLI or PowerShell.
+- On Azure Resource Manager deployment model VMs, include `"autoUpgradeMinorVersion": true` in the VM deployment template.
+- On classic deployment model VMs, specify version `4.*` if you're installing the extension through the Azure CLI or PowerShell.
You can use the same storage account you used for LAD 3.x. ## Protected settings
-This set of configuration information contains sensitive information that should be protected from public view. It contains, for example, storage credentials. The settings are transmitted to and stored by the extension in encrypted form.
+This set of configuration information contains sensitive information that should be protected from public view. It contains, for example, storage credentials. The settings are transmitted to the extension, which stores them in encrypted form.
```json {
This set of configuration information contains sensitive information that should
} ```
-Name | Value
-- | --
-storageAccountName | The name of the storage account in which the extension writes data.
-storageAccountEndPoint | (Optional) The endpoint that identifies the cloud in which the storage account exists. If this setting is absent, by default, LAD uses the Azure public cloud, `https://core.windows.net`. To use a storage account in Azure Germany, Azure Government, or Azure China 21Vianet, set this value as required.
-storageAccountSasToken | An [Account SAS token](https://azure.microsoft.com/blog/sas-update-account-sas-now-supports-all-storage-services/) for blob and table services (`ss='bt'`). This token applies to containers and objects (`srt='co'`). It grants add, create, list, update, and write permissions (`sp='acluw'`). Do *not* include the leading question-mark (?).
-mdsdHttpProxy | (Optional) HTTP proxy information the extension needs to connect to the specified storage account and endpoint.
-sinksConfig | (Optional) Details of alternative destinations to which metrics and events can be delivered. The following sections provide details about each data sink the extension supports.
+| Name | Value |
+| - | -- |
+| storageAccountName | The name of the storage account in which the extension writes data. |
+| storageAccountEndPoint | (Optional) The endpoint that identifies the cloud in which the storage account exists. If this setting is absent, by default, LAD uses the Azure public cloud, `https://core.windows.net`. To use a storage account in Azure Germany, Azure Government, or Azure China 21Vianet, set this value as required. |
+| storageAccountSasToken | An [Account SAS token](https://azure.microsoft.com/blog/sas-update-account-sas-now-supports-all-storage-services/) for blob and table services (`ss='bt'`). This token applies to containers and objects (`srt='co'`). It grants add, create, list, update, and write permissions (`sp='acluw'`). Do *not* include the leading question-mark (?). |
+| mdsdHttpProxy | (Optional) HTTP proxy information the extension needs to connect to the specified storage account and endpoint. |
+| sinksConfig | (Optional) Details of alternative destinations to which metrics and events can be delivered. The following sections provide details about each data sink the extension supports. |
To get a SAS token within an ARM template, use the `listAccountSas` function. For an example template, see [List function example](../../azure-resource-manager/templates/template-functions-resource.md#list-example).
-You can construct the required SAS token through the Azure portal:
+You can construct the required shared access signature token through the Azure portal:
1. Select the general-purpose storage account to which you want the extension to write.
-1. In the menu on the left, under **Settings**, select **Shared access signature**.
+1. In the menu on the left, under **Security + networking**, select **Shared access signature**.
1. Make the selections as previously described.
-1. Select **Generate SAS**.
+1. Select **Generate SAS and connection string**.
-Copy the generated SAS into the `storageAccountSasToken` field. Remove the leading question mark (?).
+Copy the generated shared access signature into the `storageAccountSasToken` field. Remove the leading question mark (?).
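If you prefer the Azure CLI over the portal, a command like the following generates an account SAS with the services (`bt`), resource types (`co`), and permissions (`acluw`) that LAD needs. This is a minimal sketch; the account name and expiry date are placeholders.

```azurecli
# Placeholder values; replace with your own.
storageAccountName="yourStorageAccountName"

# Generate an account SAS for blob and table services, containers and objects,
# with add, create, list, update, and write permissions.
sasToken=$(az storage account generate-sas \
    --account-name $storageAccountName \
    --services bt \
    --resource-types co \
    --permissions acluw \
    --expiry 2037-12-31T23:59:00Z \
    --output tsv)

# Use this value for the storageAccountSasToken field in the protected settings.
# It must not include a leading question mark (?).
echo $sasToken
```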
### sinksConfig
Copy the generated SAS into the `storageAccountSasToken` field. Remove the leadi
}, ```
-The `sinksConfig` optional section defines more destinations to which the extension will send collected information. The `"sink"` array contains an object for each additional data sink. The `"type"` attribute determines the other attributes in the object.
+The `sinksConfig` optional section defines more destinations to which the extension sends collected information. The `"sink"` array contains an object for each extra data sink. The `"type"` attribute determines the other attributes in the object.
-Element | Value
-- | --
-name | A string used to refer to this sink elsewhere in the extension configuration.
-type | The type of sink being defined. Determines the other values (if any) in instances of this type.
+| Element | Value |
+| - | -- |
+| name | A string used to refer to this sink elsewhere in the extension configuration. |
+| type | The type of sink being defined. Determines the other values, if any, in instances of this type. |
The Linux diagnostic extension 4.0 supports two protected sink types: `EventHub` and `JsonBlob`.
The Linux diagnostic extension 4.0 supports two protected sink types: `EventHub`
] ```
-The `"sasURL"` entry contains the full URL, including the SAS token, for the event hub to which data should be published. LAD requires a SAS to name a policy that enables the send claim. Here's an example:
+The `"sasURL"` entry contains the full URL, including the shared access signature token, for the event hub to which data should be published. LAD requires a shared access signature to name a policy that enables the send claim. Here's an example:
-* Create an Event Hubs namespace called `contosohub`.
-* Create an event hub in the namespace called `syslogmsgs`.
-* Create a shared access policy on the event hub named `writer` that enables the send claim.
+- Create an Event Hubs namespace called `contosohub`.
+- Create an event hub in the namespace called `syslogmsgs`.
+- Create a shared access policy on the event hub named `writer` that enables the send claim.
-If you created a SAS that's good until midnight UTC on January 1, 2018, the `sasURL` value might be like the following example.
+If you create a SAS that's good until midnight UTC on January 1, 2018, the `sasURL` value might be like the following example.
```https https://contosohub.servicebus.windows.net/syslogmsgs?sr=contosohub.servicebus.windows.net%2fsyslogmsgs&sig=xxxxxxxxxxxxxxxxxxxxxxxxx&se=1514764800&skn=writer
For more information about generating and retrieving information on SAS tokens f
Data directed to a `JsonBlob` sink is stored in blobs in Azure Storage. Each instance of LAD creates a blob every hour for each sink name. Each blob always contains a syntactically valid JSON array of objects. New entries are atomically added to the array.
-Blobs are stored in a container that has the same name as the sink. The Azure Storage rules for blob container names apply to the names of `JsonBlob` sinks. That is, names must have between 3 and 63 lowercase alphanumeric ASCII characters or dashes.
+Blobs are stored in a container that has the same name as the sink. The Azure Storage rules for blob container names apply to the names of `JsonBlob` sinks. Names must have between 3 and 63 lowercase alphanumeric ASCII characters or dashes.
## Public settings
The public settings structure contains various blocks of settings that control t
} ```
-Element | Value
-- | --
-StorageAccount | The name of the storage account in which the extension writes data. Must be the name specified in the [protected settings](#protected-settings).
-mdsdHttpProxy | (Optional) The proxy specified in the [protected settings](#protected-settings). If the private value is set, it overrides the public value. Put proxy settings that contain a secret, such as a password, in the [protected settings](#protected-settings).
+| Element | Value |
+| - | -- |
+| StorageAccount | The name of the storage account in which the extension writes data. Must be the name specified in the [protected settings](#protected-settings). |
+| mdsdHttpProxy | (Optional) The proxy specified in the [protected settings](#protected-settings). If the private value is set, it overrides the public value. Put proxy settings that contain a secret, such as a password, in the [protected settings](#protected-settings). |
The following sections provide details about the remaining elements.
The following sections provide details about the remaining elements.
The `ladCfg` structure controls the gathering of metrics and logs for delivery to the Azure Monitor Metrics service and to other data sinks. Specify either `performanceCounters` or `syslogEvents` or both. Also specify the `metrics` structure.
-If you don't want to enable syslog or metrics collection, specify an empty structure for the `ladCfg` element, like so:
+If you don't want to enable syslog or metrics collection, specify an empty structure for the `ladCfg` element:
```json "ladCfg": {
If you don't want to enable syslog or metrics collection, specify an empty struc
} ```
-Element | Value
-- | --
-eventVolume | (Optional) Controls the number of partitions created within the storage table. The value must be `"Large"`, `"Medium"`, or `"Small"`. If the value isn't specified, the default value is `"Medium"`.
-sampleRateInSeconds | (Optional) The default interval between the collection of raw (unaggregated) metrics. The smallest supported sample rate is 15 seconds. If the value isn't specified, the default is `15`.
+| Element | Value |
+| - | -- |
+| eventVolume | (Optional) Controls the number of partitions created within the storage table. The value must be `"Large"`, `"Medium"`, or `"Small"`. The default value is `"Medium"`. |
+| sampleRateInSeconds | (Optional) The default interval between the collection of raw, that is, unaggregated, metrics. The smallest supported sample rate is 15 seconds. The default is `15`. |
#### metrics
sampleRateInSeconds | (Optional) The default interval between the collection of
} ```
-Element | Value
-- | --
-resourceId | The Azure Resource Manager resource ID of the VM or of the virtual machine scale set to which the VM belongs. Also specify this setting if the configuration uses any `JsonBlob` sink.
-scheduledTransferPeriod | The frequency at which aggregate metrics are computed and transferred to Azure Monitor Metrics. The frequency is expressed as an IS 8601 time interval. The smallest transfer period is 60 seconds, that is, PT1M. Specify at least one `scheduledTransferPeriod`.
+| Element | Value |
+| - | -- |
+| resourceId | The Azure Resource Manager resource ID of the VM or of the Virtual Machine Scale Set to which the VM belongs. Also specify this setting if the configuration uses any `JsonBlob` sink. |
+| scheduledTransferPeriod | The frequency at which aggregate metrics are computed and transferred to Azure Monitor Metrics. The frequency is expressed as an ISO 8601 time interval. The smallest transfer period is 60 seconds, that is, PT1M. Specify at least one `scheduledTransferPeriod`. |
Samples of the metrics specified in the `performanceCounters` section are collected every 15 seconds or at the sample rate explicitly defined for the counter. If multiple `scheduledTransferPeriod` frequencies appear, as in the example, each aggregation is computed independently.
Samples of the metrics specified in the `performanceCounters` section are collec
} ```
-The `performanceCounters` optional section controls the collection of metrics. Raw samples are aggregated for each [`scheduledTransferPeriod`](#metrics) to produce these values:
+The `performanceCounters` optional section controls the collection of metrics. Raw samples are aggregated for each [scheduledTransferPeriod](#metrics) to produce these values:
-* Mean
-* Minimum
-* Maximum
-* Last-collected value
-* Count of raw samples used to compute the aggregate
+- Mean
+- Minimum
+- Maximum
+- Last-collected value
+- Count of raw samples used to compute the aggregate
-Element | Value
-- | --
-sinks | (Optional) A comma-separated list of names of sinks to which LAD sends aggregated metric results. All aggregated metrics are published to each listed sink. Example: `"MyEventHubSink, MyJsonSink, MyAzMonSink"`. For more information, see [`sinksConfig` (protected settings)](#sinksconfig) and [`sinksConfig` (public settings)](#sinksconfig-1).
-type | Identifies the actual provider of the metric.
-class | Together with `"counter"`, identifies the specific metric within the provider's namespace.
-counter | Together with `"class"`, identifies the specific metric within the provider's namespace. See a list of available counters [below](#metrics-supported-by-the-builtin-provider).
-counterSpecifier | Identifies the specific metric within the Azure Monitor Metrics namespace.
-condition | (Optional) Selects an instance of the object to which the metric applies. Or selects the aggregation across all instances of that object.
-sampleRate | The IS 8601 interval that sets the rate at which raw samples for this metric are collected. If the value isn't set, the collection interval is set by the value of [`sampleRateInSeconds`](#ladcfg). The shortest supported sample rate is 15 seconds (PT15S).
-unit | Defines the unit for the metric. Should be one of these strings: `"Count"`, `"Bytes"`, `"Seconds"`, `"Percent"`, `"CountPerSecond"`, `"BytesPerSecond"`, `"Millisecond"`. Consumers of the collected data expect the collected data values to match this unit. LAD ignores this field.
-displayName | The label to be attached to the data in Azure Monitor Metrics when viewing in the `Guest (classic)` metrics namespace. This label is in the language specified by the associated locale setting. LAD ignores this field.<br/>**Note**: if viewing the same metric in the `azure.vm.linux.guestmetrics` Metrics Namespace (available if `AzMonSink` is configured) the display name depends entirely on the counter. See the [tables below](#metrics-supported-by-the-builtin-provider) to find the mapping between counters and names.
+| Element | Value |
+| - | -- |
+| sinks | (Optional) A comma-separated list of names of sinks to which LAD sends aggregated metric results. All aggregated metrics are published to each listed sink. For example, `"MyEventHubSink, MyJsonSink, MyAzMonSink"`. For more information, see [`sinksConfig` (protected settings)](#sinksconfig) and [`sinksConfig` (public settings)](#sinksconfig-1). |
+| type | Identifies the actual provider of the metric. |
+| class | Together with `"counter"`, identifies the specific metric within the provider's namespace. |
+| counter | Together with `"class"`, identifies the specific metric within the provider's namespace. [See a list of available counters](#metrics-supported-by-the-builtin-provider). |
+| counterSpecifier | Identifies the metric within the Azure Monitor Metrics namespace. |
+| condition | (Optional) Selects an instance of the object to which the metric applies. Or selects the aggregation across all instances of that object. |
+| sampleRate | The ISO 8601 interval that sets the rate at which raw samples for this metric are collected. If the value isn't set, the value of [`sampleRateInSeconds`](#ladcfg) sets the collection interval. The shortest supported sample rate is 15 seconds (PT15S). |
+| unit | Defines the unit for the metric. Should be one of these strings: `"Count"`, `"Bytes"`, `"Seconds"`, `"Percent"`, `"CountPerSecond"`, `"BytesPerSecond"`, `"Millisecond"`. Consumers of the collected data expect the collected data values to match this unit. LAD ignores this field. |
+| displayName | The label to attach to the data in Azure Monitor Metrics when viewing in the `Guest (classic)` metrics namespace. This label is in the language specified by the associated locale setting. LAD ignores this field. **Note**: If viewing the same metric in the `azure.vm.linux.guestmetrics` Metrics Namespace, which is available if `AzMonSink` is configured, the display name depends entirely on the counter. To find the mapping between counters and names, see [Metrics supported by the builtin provider](#metrics-supported-by-the-builtin-provider). |
-The `counterSpecifier` is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use `counterSpecifier` as the "key" that identifies a metric or an instance of a metric.
+The `counterSpecifier` is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use `counterSpecifier` as the key that identifies a metric or an instance of a metric.
-For `builtin` metrics, we recommend `counterSpecifier` values that begin with `/builtin/`. If you're collecting a specific instance of a metric, attach the identifier of the instance to the `counterSpecifier` value. Here are some examples:
+For `builtin` metrics, we recommend `counterSpecifier` values that begin with `/builtin/`. To collect a specific instance of a metric, attach the identifier of the instance to the `counterSpecifier` value. Here are some examples:
-* `/builtin/Processor/PercentIdleTime` - Idle time averaged across all vCPUs
-* `/builtin/Disk/FreeSpace(/mnt)` - Free space for the `/mnt` file system
-* `/builtin/Disk/FreeSpace` - Free space averaged across all mounted file systems
+- `/builtin/Processor/PercentIdleTime`. Idle time averaged across all vCPUs
+- `/builtin/Disk/FreeSpace(/mnt)`. Free space for the `/mnt` file system
+- `/builtin/Disk/FreeSpace`. Free space averaged across all mounted file systems
LAD and the Azure portal don't expect the `counterSpecifier` value to match any pattern. Be consistent in how you construct `counterSpecifier` values.
-When you specify `performanceCounters`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
+When you specify `performanceCounters`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. You can't disable storing data to a table.
All instances of LAD that use the same storage account name and endpoint add their metrics and logs to the same table. If too many VMs write to the same table partition, Azure can throttle writes to that partition.
The `eventVolume` setting causes entries to be spread across 1 (small), 10 (medi
The Azure Monitor Metrics feature of the Azure portal uses the data in this table to produce graphs or to trigger alerts. The table name is the concatenation of these strings:
-* `WADMetrics`
-* The `"scheduledTransferPeriod"` for the aggregated values stored in the table
-* `P10DV2S`
-* A date, in the form "YYYYMMDD", which changes every 10 days
+- `WADMetrics`
+- The `"scheduledTransferPeriod"` for the aggregated values stored in the table
+- `P10DV2S`
+- A date, in the form *YYYYMMDD*, which changes every 10 days
Examples include `WADMetricsPT1HP10DV2S20170410` and `WADMetricsPT1MP10DV2S20170609`.
The `syslogEvents` optional section controls the collection of log events from s
The `syslogEventConfiguration` collection has one entry for each syslog facility of interest. If `minSeverity` is `"NONE"` for a particular facility, or if that facility doesn't appear in the element at all, no events from that facility are captured.
-Element | Value
-- | --
+| Element | Value |
+| - | -- |
sinks | A comma-separated list of names of sinks to which individual log events are published. All log events that match the restrictions in `syslogEventConfiguration` are published to each listed sink. Example: `"EHforsyslog"`
-facilityName | A syslog facility name, such as `"LOG\_USER"` or `"LOG\_LOCAL0"`. For more information, see the "facility" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html).
-minSeverity | A syslog severity level, such as `"LOG\_ERR"` or `"LOG\_INFO"`. For more information, see the "level" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html). The extension captures events sent to the facility at or above the specified level.
+facilityName | A syslog facility name, such as `"LOG_USER"` or `"LOG_LOCAL0"`. For more information, see *Values for facility* in the [syslog manual page](http://man7.org/linux/man-pages/man3/syslog.3.html).
+minSeverity | A syslog severity level, such as `"LOG_ERR"` or `"LOG_INFO"`. For more information, see *Values for level* in the [syslog manual page](http://man7.org/linux/man-pages/man3/syslog.3.html). The extension captures events sent to the facility at or above the specified level.
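Putting these elements together, a `syslogEvents` section that captures informational and higher events from the `user` facility and errors from `local0` might look like the following sketch. It assumes that each entry in `syslogEventConfiguration` maps a facility name directly to its minimum severity, as described earlier; the sink name `EHforsyslog` is the example name from the table above and must match a sink defined in your protected settings.

```json
"syslogEvents": {
    "sinks": "EHforsyslog",
    "syslogEventConfiguration": {
        "LOG_USER": "LOG_INFO",
        "LOG_LOCAL0": "LOG_ERR"
    }
}
```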
-When you specify `syslogEvents`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
+When you specify `syslogEvents`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. You can't disable storing data to a table.
The partitioning behavior for this table is the same as described for `performanceCounters`. The table name is the concatenation of these strings:
-* `LinuxSyslog`
-* A date, in the form "YYYYMMDD", which changes every 10 days
+- `LinuxSyslog`
+- A date, in the form *YYYYMMDD*, which changes every 10 days
Examples include `LinuxSyslog20170410` and `LinuxSyslog20170609`. ### sinksConfig
-The optional public `sinksConfig` section enables sending metrics to the Azure Monitor sink in addition to the Storage account and the default Guest Metrics blade.
+The optional public `sinksConfig` section enables sending metrics to the Azure Monitor sink in addition to the Storage account and the default Guest Metrics view.
> [!NOTE] > Both public and protected settings have an optional `sinksConfig` section. The `sinksConfig` section in the *public* settings only holds the `AzMonSink` sink configuration. `EventHub` and `JsonBlob` sink configurations **cannot** be included in your public settings.
The optional public `sinksConfig` section enables sending metrics to the Azure M
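For reference, a minimal public `sinksConfig` that defines an Azure Monitor sink might look like the following sketch. The sink name `MyAzMonSink` matches the example name used earlier in this article; the empty `AzureMonitor` object is an assumption about the sink schema, so confirm it against the documentation for your LAD version.

```json
"sinksConfig": {
    "sink": [
        {
            "name": "MyAzMonSink",
            "type": "AzMonSink",
            "AzureMonitor": {}
        }
    ]
}
```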
### fileLogs
-The `fileLogs` section controls the capture of log files. LAD captures new text lines as they're written to the file. It writes them to table rows and/or any specified sinks, such as `JsonBlob` and `EventHub`.
+The `fileLogs` section controls the capture of log files. LAD captures new text lines as they're written to the file. It writes them to table rows and any specified sinks, such as `JsonBlob` and `EventHub`.
> [!NOTE]
-> The `fileLogs` are captured by a subcomponent of LAD called `omsagent`. To collect `fileLogs`, ensure that the `omsagent` user has read permissions on the files you specify. It must also have execute permissions on all directories in the path to that file. After LAD is installed, you can check permissions by running `sudo su omsagent -c 'cat /path/to/file'`.
+> The `fileLogs` are captured by a subcomponent of LAD called `omsagent`. To collect `fileLogs`, ensure that the `omsagent` user has read permissions on the files you specify. It must also have execute permissions on all directories in the path to that file. After LAD is installed, to check permissions, run `sudo su omsagent -c 'cat /path/to/file'`.
```json "fileLogs": [
The `fileLogs` section controls the capture of log files. LAD captures new text
] ```
-Element | Value
-- | --
-file | The full path name of the log file to be watched and captured. The path name is for a single file. It can't name a directory or contain wildcard characters. The `omsagent` user account must have read access to the file path.
-table | (Optional) The Azure Storage table into which new lines from the "tail" of the file are written. The table must be in the designated storage account, as specified in the protected configuration.
-sinks | (Optional) A comma-separated list of names of more sinks to which log lines are sent.
+| Element | Value |
+| - | -- |
+| file | The full path of the log file to be watched and captured. The path can't specify a directory or contain wildcard characters. The `omsagent` user account must have read access to the file path. |
+| table | (Optional) The Azure Storage Table into which new lines from the tail of the file are written. The table must be in the designated storage account, as specified in the protected configuration. |
+| sinks | (Optional) A comma-separated list of names of more sinks to which log lines are sent. |
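Combining these elements, a single `fileLogs` entry that tails `/var/log/myladtestlog` (the file used in the sample configuration later in this article) might look like the following sketch. The sink name is a placeholder for a `JsonBlob` or `EventHub` sink defined in your protected settings.

```json
"fileLogs": [
    {
        "file": "/var/log/myladtestlog",
        "table": "MyLadTestLog",
        "sinks": "MyJsonSink"
    }
]
```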
Either `"table"` or `"sinks"` or both must be specified. ## Metrics supported by the builtin provider
-> [!NOTE]
-> The default metrics that LAD supports are aggregated across all file systems, disks, or names. For nonaggregated metrics, refer to the newer Azure Monitor sink metrics support.
+The default metrics that LAD supports are aggregated across all file systems, disks, or names. For nonaggregated metrics, refer to the newer Azure Monitor sink metrics support.
> [!NOTE]
-> The display names for each metric will differ depending on the metrics namespace to which it belongs:
-> * `Guest (classic)` (populated from your storage account): the specified `displayName` in the `performanceCounters` section, or the default display name as seen in Azure Portal (VM > Diagnostic settings > Metrics > Custom).
-> * `azure.vm.linux.guestmetrics` (populated from `AzMonSink` if configured): the "`azure.vm.linux.guestmetrics` Display Name" specified in the tables below.
+> The display names for each metric differ depending on the metrics namespace to which it belongs:
+>
+> - `Guest (classic)`, populated from your storage account: the specified `displayName` in the `performanceCounters` section, or the default display name as shown in the Azure portal. For the VM, under **Monitoring** > **Diagnostic settings**, select the **Metrics** tab.
+> - `azure.vm.linux.guestmetrics`, populated from `AzMonSink` if configured: the "`azure.vm.linux.guestmetrics` Display Name" specified in the following tables.
>
-> Due to implementation details, the metric values between `Guest (classic)` and `azure.vm.linux.guestmetrics` versions will differ. While the classic metrics had certain aggregations applied in the agent, the new metrics are unaggregated counters, giving customers the flexibility to aggregate as desired at viewing/alerting time.
+> The metric values between `Guest (classic)` and `azure.vm.linux.guestmetrics` versions differ. While the classic metrics had certain aggregations applied in the agent, the new metrics are unaggregated counters, giving customers the flexibility to aggregate as desired at viewing/alerting time.
The `builtin` metric provider is a source of metrics that are the most interesting to a broad set of users. These metrics fall into five broad classes:
-* Processor
-* Memory
-* Network
-* File system
-* Disk
+- Processor
+- Memory
+- Network
+- File system
+- Disk
### builtin metrics for the Processor class
The Processor class of metrics provides information about processor usage in the
In a two-vCPU VM, if one vCPU is 100 percent busy and the other is 100 percent idle, the reported `PercentIdleTime` is 50. If each vCPU is 50 percent busy for the same period, the reported result is also 50. In a four-vCPU VM, when one vCPU is 100 percent busy and the others are idle, the reported `PercentIdleTime` is 75.
-Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning
- | - | -
-`PercentIdleTime` | `cpu idle time` | Percentage of time during the aggregation window that processors ran the kernel idle loop
-`PercentProcessorTime` | `cpu percentage guest os` | Percentage of time running a non-idle thread
-`PercentIOWaitTime` | `cpu io wait time` | Percentage of time waiting for IO operations to finish
-`PercentInterruptTime` | `cpu interrupt time` | Percentage of time running hardware or software interrupts and DPCs (deferred procedure calls)
-`PercentUserTime` | `cpu user time` | Of non-idle time during the aggregation window, the percentage of time spent in user mode at normal priority
-`PercentNiceTime` | `cpu nice time` | Of non-idle time, the percentage spent at lowered (nice) priority
-`PercentPrivilegedTime` | `cpu privileged time` | Of non-idle time, the percentage spent in privileged (kernel) mode
+| Counter | azure.vm.linux.guestmetrics Display Name | Meaning |
+| - | - | - |
+| PercentIdleTime | `cpu idle time` | Percentage of time during the aggregation window that processors ran the kernel idle loop |
+| PercentProcessorTime | `cpu percentage guest os` | Percentage of time running a thread that isn't idle |
+| PercentIOWaitTime | `cpu io wait time` | Percentage of time waiting for I/O operations to finish |
+| PercentInterruptTime | `cpu interrupt time` | Percentage of time running hardware or software interrupts and deferred procedure calls (DPCs) |
+| PercentUserTime | `cpu user time` | Of the time that isn't idle during the aggregation window, the percentage of time spent in user mode at normal priority |
+| PercentNiceTime | `cpu nice time` | Of the time that isn't idle, the percentage spent at lowered (nice) priority |
+| PercentPrivilegedTime | `cpu privileged time` | Of the time that isn't idle, the percentage spent in privileged (kernel) mode |
The first four counters should sum to 100 percent. The last three counters also sum to 100 percent. These three counters subdivide the sum of `PercentProcessorTime`, `PercentIOWaitTime`, and `PercentInterruptTime`.
The first four counters should sum to 100 percent. The last three counters also
The Memory class of metrics provides information about memory use, paging, and swapping.
-Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning
- | - | -
-`AvailableMemory` | `memory available` | Available physical memory in MiB
-`PercentAvailableMemory` | `mem. percent available` | Available physical memory as a percentage of total memory
-`UsedMemory` | `memory used` | In-use physical memory (MiB)
-`PercentUsedMemory` | `memory percentage` | In-use physical memory as a percentage of total memory
-`PagesPerSec` | `pages` | Total paging (read/write)
-`PagesReadPerSec` | `page reads` | Pages read from the backing store, such as swap file, program file, and mapped file
-`PagesWrittenPerSec` | `page writes` | Pages written to the backing store, such as swap file and mapped file
-`AvailableSwap` | `swap available` | Unused swap space (MiB)
-`PercentAvailableSwap` | `swap percent available` | Unused swap space as a percentage of the total swap
-`UsedSwap` | `swap used` | In-use swap space (MiB)
-`PercentUsedSwap` | `swap percent used` | In-use swap space as a percentage of the total swap
+| Counter | azure.vm.linux.guestmetrics Display Name | Meaning |
+| - | - | - |
+| AvailableMemory | `memory available` | Available physical memory in MiB |
+| PercentAvailableMemory | `mem. percent available` | Available physical memory as a percentage of total memory |
+| UsedMemory | `memory used` | In-use physical memory (MiB) |
+| PercentUsedMemory | `memory percentage` | In-use physical memory as a percentage of total memory |
+| PagesPerSec | `pages` | Total paging (read/write) |
+| PagesReadPerSec | `page reads` | Pages read from the backing store, such as swap file, program file, and mapped file |
+| PagesWrittenPerSec | `page writes` | Pages written to the backing store, such as swap file and mapped file |
+| AvailableSwap | `swap available` | Unused swap space (MiB) |
+| PercentAvailableSwap | `swap percent available` | Unused swap space as a percentage of the total swap |
+| UsedSwap | `swap used` | In-use swap space (MiB) |
+| PercentUsedSwap | `swap percent used` | In-use swap space as a percentage of the total swap |
This class of metrics has only one instance. The `"condition"` attribute has no useful settings and should be omitted.
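For example, a Memory metric definition written as a sketch, with no `condition` element, might look like this. It follows the same element layout assumed for `performanceCounters` earlier in this article.

```json
{
    "type": "builtin",
    "class": "Memory",
    "counter": "PercentUsedMemory",
    "counterSpecifier": "/builtin/Memory/PercentUsedMemory",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "displayName": "Memory percentage"
}
```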
The Network class of metrics provides information about network activity on an i
LAD doesn't expose bandwidth metrics. You can get these metrics from host metrics.
-Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning
- | - | -
-`BytesTransmitted` | `network out guest os` | Total bytes sent since startup
-`BytesReceived` | `network in guest os` | Total bytes received since startup
-`BytesTotal` | `network total bytes` | Total bytes sent or received since startup
-`PacketsTransmitted` | `packets sent` | Total packets sent since startup
-`PacketsReceived` | `packets received` | Total packets received since startup
-`TotalRxErrors` | `packets received errors` | Number of receive errors since startup
-`TotalTxErrors` | `packets sent errors` | Number of transmit errors since startup
-`TotalCollisions` | `network collisions` | Number of collisions reported by the network ports since startup
+| Counter | azure.vm.linux.guestmetrics Display Name | Meaning |
+| - | - | - |
+| BytesTransmitted | `network out guest os` | Total bytes sent since startup |
+| BytesReceived | `network in guest os` | Total bytes received since startup |
+| BytesTotal | `network total bytes` | Total bytes sent or received since startup |
+| PacketsTransmitted | `packets sent` | Total packets sent since startup |
+| PacketsReceived | `packets received` | Total packets received since startup |
+| TotalRxErrors | `packets received errors` | Number of receive errors since startup |
+| TotalTxErrors | `packets sent errors` | Number of transmit errors since startup |
+| TotalCollisions | `network collisions` | Number of collisions reported by the network ports since startup |
### builtin metrics for the File system class
-The File system class of metrics provides information about file system usage. Absolute and percentage values are reported as they would be displayed to an ordinary user (not root).
-
-Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning
- | - | -
-`FreeSpace` | `filesystem free space` | Available disk space in bytes
-`UsedSpace` | `filesystem used space` | Used disk space in bytes
-`PercentFreeSpace` | `filesystem % free space` | Percentage of free space
-`PercentUsedSpace` | `filesystem % used space` | Percentage of used space
-`PercentFreeInodes` | `filesystem % free inodes` | Percentage of unused index nodes (inodes)
-`PercentUsedInodes` | `filesystem % used inodes` | Percentage of allocated (in use) inodes summed across all file systems
-`BytesReadPerSecond` | `filesystem read bytes/sec` | Bytes read per second
-`BytesWrittenPerSecond` | `filesystem write bytes/sec` | Bytes written per second
-`BytesPerSecond` | `filesystem bytes/sec` | Bytes read or written per second
-`ReadsPerSecond` | `filesystem reads/sec` | Read operations per second
-`WritesPerSecond` | `filesystem writes/sec` | Write operations per second
-`TransfersPerSecond` | `filesystem transfers/sec` | Read or write operations per second
+The File system class of metrics provides information about file system usage. Absolute and percentage values are reported as they would be displayed to an ordinary user, not root.
+
+| Counter | azure.vm.linux.guestmetrics Display Name | Meaning |
+| - | - | - |
+| FreeSpace | `filesystem free space` | Available disk space in bytes |
+| UsedSpace | `filesystem used space` | Used disk space in bytes |
+| PercentFreeSpace | `filesystem % free space` | Percentage of free space |
+| PercentUsedSpace | `filesystem % used space` | Percentage of used space |
+| PercentFreeInodes | `filesystem % free inodes` | Percentage of unused index nodes (inodes) |
+| PercentUsedInodes | `filesystem % used inodes` | Percentage of allocated (in use) inodes summed across all file systems |
+| BytesReadPerSecond | `filesystem read bytes/sec` | Bytes read per second |
+| BytesWrittenPerSecond | `filesystem write bytes/sec` | Bytes written per second |
+| BytesPerSecond | `filesystem bytes/sec` | Bytes read or written per second |
+| ReadsPerSecond | `filesystem reads/sec` | Read operations per second |
+| WritesPerSecond | `filesystem writes/sec` | Write operations per second |
+| TransfersPerSecond | `filesystem transfers/sec` | Read or write operations per second |
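To collect file system metrics for a single mount point instead of the aggregate, add a `condition` that selects the instance. The following sketch targets the root file system; the exact condition syntax and class name casing are assumptions, so check them against the sample configuration for your LAD version.

```json
{
    "type": "builtin",
    "class": "Filesystem",
    "counter": "PercentUsedSpace",
    "counterSpecifier": "/builtin/Filesystem/PercentUsedSpace(/)",
    "condition": "Name=\"/\"",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "displayName": "Used space on /"
}
```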
### builtin metrics for the Disk class
The Disk class of metrics provides information about disk device usage. These st
When a device has multiple file systems, the counters for that device are, effectively, aggregated across all file systems.
-Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning
- | - | -
-`ReadsPerSecond` | `disk reads` | Read operations per second
-`WritesPerSecond` | `disk writes` | Write operations per second
-`TransfersPerSecond` | `disk transfers` | Total operations per second
-`AverageReadTime` | `disk read time` | Average seconds per read operation
-`AverageWriteTime` | `disk write time` | Average seconds per write operation
-`AverageTransferTime` | `disk transfer time` | Average seconds per operation
-`AverageDiskQueueLength` | `disk queue length` | Average number of queued disk operations
-`ReadBytesPerSecond` | `disk read guest os` | Number of bytes read per second
-`WriteBytesPerSecond` | `disk write guest os` | Number of bytes written per second
-`BytesPerSecond` | `disk total bytes` | Number of bytes read or written per second
+| Counter | azure.vm.linux.guestmetrics Display Name | Meaning |
+| - | - | - |
+| ReadsPerSecond | `disk reads` | Read operations per second |
+| WritesPerSecond | `disk writes` | Write operations per second |
+| TransfersPerSecond | `disk transfers` | Total operations per second |
+| AverageReadTime | `disk read time` | Average seconds per read operation |
+| AverageWriteTime | `disk write time` | Average seconds per write operation |
+| AverageTransferTime | `disk transfer time` | Average seconds per operation |
+| AverageDiskQueueLength | `disk queue length` | Average number of queued disk operations |
+| ReadBytesPerSecond | `disk read guest os` | Number of bytes read per second |
+| WriteBytesPerSecond | `disk write guest os` | Number of bytes written per second |
+| BytesPerSecond | `disk total bytes` | Number of bytes read or written per second |
## Example LAD 4.0 configuration
-Based on the preceding definitions, this section provides a sample LAD 4.0 extension configuration and some explanation. To apply this sample to your case, use your own storage account name, account SAS token, and Event Hubs SAS tokens.
+Based on the preceding definitions, this section provides a sample LAD 4.0 extension configuration and some explanation. To apply this sample, use your own storage account name, account shared access signature (SAS) token, and Event Hubs SAS tokens.
> [!NOTE]
-> Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings differs:
+> Depending on whether you use the Azure CLI or Azure PowerShell to install LAD, the method for providing public and protected settings differs:
>
-> * If you're using the Azure CLI, save the following settings to *ProtectedSettings.json* and *PublicSettings.json* to use the preceding sample command.
-> * If you're using PowerShell, save the following settings to `$protectedSettings` and `$publicSettings` by running `$protectedSettings = '{ ... }'`.
+> - If you're using the Azure CLI, save the following settings to *ProtectedSettings.json* and *PublicSettings.json* to use the preceding sample command.
+> - If you're using PowerShell, run `$protectedSettings = '{ ... }'` and `$publicSettings = '{ ... }'` to save the following settings to `$protectedSettings` and `$publicSettings`.
-### Protected settings
+### Protected settings configuration
The protected settings configure:
-* A storage account.
-* A matching account SAS token.
-* Several sinks (`JsonBlob` or `EventHub` with SAS tokens).
+- A storage account.
+- A matching account shared access signature token.
+- Several sinks: `JsonBlob` or `EventHub` with SAS tokens.
```json {
The protected settings configure:
} ```
-### Public settings
+### Public settings configuration
The public settings cause LAD to:
-* Upload percent-processor-time metrics and used-disk-space metrics to the `WADMetrics*` table,
-* Upload messages from syslog facility `"user"` and severity `"info"` to the `LinuxSyslog*` table.
-* Upload appended lines in file `/var/log/myladtestlog` to the `MyLadTestLog` table.
+- Upload percent-processor-time metrics and used-disk-space metrics to the `WADMetrics*` table.
+- Upload messages from syslog facility `"user"` and severity `"info"` to the `LinuxSyslog*` table.
+- Upload appended lines in file `/var/log/myladtestlog` to the `MyLadTestLog` table.
In each case, data is also uploaded to:
-* Azure Blob Storage. The container name is as defined in the `JsonBlob` sink.
-* An Event Hubs endpoint, as specified in the `EventHub` sink.
+- Azure Blob Storage. The container name is as defined in the `JsonBlob` sink.
+- An Event Hubs endpoint, as specified in the `EventHub` sink.
```json {
In each case, data is also uploaded to:
} ```
-The `resourceId` in the configuration must match that of the VM or the virtual machine scale set.
+The `resourceId` in the configuration must match that of the VM or the Virtual Machine Scale Set.
-* Azure platform metrics charting and alerting knows the `resourceId` of the VM you're working on. It expects to find the data for your VM by using the `resourceId` the lookup key.
-* If you use Azure Autoscale, the `resourceId` in the autoscale configuration must match the `resourceId` that LAD uses.
-* The `resourceId` is built in to the names of JSON blobs written by LAD.
+- Azure platform metrics charting and alerting knows the `resourceId` of the VM you're working on. It expects to find the data for your VM by using the `resourceId` as the lookup key.
+- If you use Azure autoscale, the `resourceId` in the autoscale configuration must match the `resourceId` that LAD uses.
+- The `resourceId` is built in to the names of JSON blobs written by LAD.
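As a rough sketch of where the `resourceId` sits, the `metrics` section of `ladCfg` in the public settings carries it alongside the aggregation intervals. The nesting and the `metricAggregation` element shown here are assumptions based on the `ladCfg` settings referenced earlier in this article; replace the placeholder IDs with your own values.

```json
"ladCfg": {
    "diagnosticMonitorConfiguration": {
        "metrics": {
            "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
            "metricAggregation": [
                { "scheduledTransferPeriod": "PT1H" },
                { "scheduledTransferPeriod": "PT1M" }
            ]
        }
    }
}
```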
## View your data Use the Azure portal to view performance data or set alerts: The `performanceCounters` data is always stored in an Azure Storage table. Azure Storage APIs are available for many languages and platforms.
-Data sent to `JsonBlob` sinks is stored in blobs in the storage account named in the [protected settings](#protected-settings). You can consume the blob data in any Azure Blob Storage APIs.
+Data sent to `JsonBlob` sinks is stored in blobs in the storage account named in the [protected settings](#protected-settings-configuration). You can consume the blob data in any Azure Blob Storage API.
You also can use these UI tools to access the data in Azure Storage:
-* Visual Studio Server Explorer
-* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
+- Visual Studio Server Explorer
+- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
The following screenshot of an Azure Storage Explorer session shows the generated Azure Storage tables and containers from a correctly configured LAD 4.0 extension on a test VM. The image doesn't exactly match the [sample LAD 4.0 configuration](#example-lad-40-configuration).
For more information about how to consume messages published to an Event Hubs en
## Next steps
-* In [Azure Monitor](../../azure-monitor/alerts/alerts-classic-portal.md), create alerts for the metrics you collect.
-* [Create monitoring charts](../../azure-monitor/data-platform.md) for your metrics.
-* [Create a virtual machine scale set](../linux/tutorial-create-vmss.md) by using your metrics to control autoscaling.
+- In [Azure Monitor](../../azure-monitor/alerts/alerts-classic-portal.md), create alerts for the metrics you collect.
+- [Create monitoring charts](../../azure-monitor/data-platform.md) for your metrics.
+- [Create a Virtual Machine Scale Set](../linux/tutorial-create-vmss.md) by using your metrics to control autoscaling.
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/iaas-antimalware-windows.md
documentationcenter: '' -+ ms.assetid:
vm-windows Previously updated : 01/25/2023 Last updated : 04/10/2023
Depends on your type of deployment, use the corresponding commands to deploy the
* [Azure Resource Manager based Virtual Machine](../../security/fundamentals/antimalware-code-samples.md#enable-and-configure-microsoft-antimalware-for-azure-resource-manager-vms) * [Azure Service Fabric Clusters](../../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-to-azure-service-fabric-clusters)
- * [Classic Cloud Service](/powershell/module/servicemanagement/azure.service/set-azureserviceextension)
+ * [Azure Arc-enabled servers](../../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-for-azure-arc-enabled-servers) ## Troubleshoot and support
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
Sysprep removes all your personal account and security information, and then pre
Make sure the server roles running on the machine are supported by Sysprep. For more information, see [Sysprep support for server roles](/windows-hardware/manufacture/desktop/sysprep-support-for-server-roles) and [Unsupported scenarios](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview#unsupported-scenarios). > [!IMPORTANT]
-> After you have run Sysprep on a VM, that VM is considered *generalized* and cannot be restarted. The process of generalizing a VM is not reversible. If you need to keep the original VM functioning, you should create a snapshot of the OS disk, create a VM from the snapshot, and then and generalize that copy of the VM
+> After you have run Sysprep on a VM, that VM is considered *generalized* and cannot be restarted. The process of generalizing a VM is not reversible. If you need to keep the original VM functioning, you should create a snapshot of the OS disk, create a VM from the snapshot, and then generalize that copy of the VM.
> > Sysprep requires the drives to be fully decrypted. If you have enabled encryption on your VM, disable encryption before you run Sysprep. >
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
The output is similar to the following example:
-```bash
+```output
sda 0:0:0:0 30G ├─sda1 29.9G / ├─sda14 4M
In the image, you can see that there are 3 data disks: 4 GB on LUN 0, 16GB at LU
Here's what that might look like using `lsblk`:
-```bash
+```output
sda 0:0:0:0 30G ├─sda1 29.9G / ├─sda14 4M
sudo blkid
The output looks similar to the following example:
-```bash
+```output
/dev/sda1: LABEL="cloudimg-rootfs" UUID="11111111-1b1b-1c1c-1d1d-1e1e1e1e1e1e" TYPE="ext4" PARTUUID="1a1b1c1d-11aa-1234-1a1a1a1a1a1a" /dev/sda15: LABEL="UEFI" UUID="BCD7-96A6" TYPE="vfat" PARTUUID="1e1g1cg1h-11aa-1234-1u1u1a1a1u1u" /dev/sdb1: UUID="22222222-2b2b-2c2c-2d2d-2e2e2e2e2e2e" TYPE="ext4" PARTUUID="1a2b3c4d-01"
The output looks similar to the following example:
Next, open the **/etc/fstab** file in a text editor. Add a line to the end of the file, using the UUID value for the `/dev/sdc1` device that was created in the previous steps, and the mountpoint of `/datadrive`. Using the example from this article, the new line would look like the following:
-```bash
+```config
UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,nofail 1 2 ```
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
The output will look something like this:
-```bash
+```output
sda 0:0:0:0 30G ├─sda1 29.9G / ├─sda14 4M
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
* Use the `discard` mount option in */etc/fstab*, for example:
- ```bash
+ ```config
UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,discard 1 2 ``` * In some cases, the `discard` option may have performance implications. Alternatively, you can run the `fstrim` command manually from the command line, or add it to your crontab to run regularly:
sudo fstrim /datadrive
# [SUSE](#tab/suse) ```bash
+sudo zypper install util-linux
sudo fstrim /datadrive ```
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ```
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ```
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ```
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section assumes that you've already obtained an ISO file from the Red Hat w
```bash if [[ -f /mnt/resource/swapfile ]]; then
- echo "Removing swapfile" #RHEL uses a swapfile by defaul
+ echo "Removing swapfile" #RHEL uses a swapfile by default
swapoff /mnt/resource/swapfile rm /mnt/resource/swapfile -f fi
This section assumes that you've already obtained an ISO file from the Red Hat w
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ```
This section assumes that you've already obtained an ISO file from the Red Hat w
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.device-timeout=2,x-systemd.requires=cloud-init.service", "0", "0"] EOF ```
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/resource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.device-timeout=2,x-systemd.requires=cloud-init.service", "0", "0"] EOF
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
- device: ephemeral0.2 filesystem: swap mounts:
- - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.1", "/mnt/ressource"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"] EOF ```
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
Previously updated : 05/13/2022 Last updated : 04/06/2023 + #Customer intent: As an IT administrator or developer, I want to learn about cloud-init so that I can customize and configure Linux VMs in Azure on first boot to minimize the number of post-deployment configuration tasks required.
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-secure-web-server.md
Previously updated : 12/9/2022 Last updated : 04/09/2023
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
Each event is scheduled a minimum amount of time in the future based on the even
| Preempt | 30 seconds | | Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes |
-Once an event is scheduled it will move into the started state after it's either approved or the not before time passes. However in rare cases the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array and the impact won't not occur as previously scheduled.
+Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array, and the impact will not occur as previously scheduled.
> [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-automate-vm-deployment.md
Title: Tutorial - Install applications on a Windows VM in Azure
-description: In this tutorial, you learn how to use the Custom Script Extension to run scripts and deploy applications to Windows virtual machines in Azure
+description: Learn how to use the Custom Script Extension to run scripts and deploy applications to Windows virtual machines in Azure.
Previously updated : 11/29/2018 Last updated : 04/07/2023
-#Customer intent: As an IT administrator or developer, I want learn about how to install applications on Windows VMs so that I can automate the process and reduce the risk of human error of manual configuration tasks.
+#Customer intent: As an IT administrator or developer, I want to learn about how to install applications on Windows VMs so that I can automate the process and reduce the risk of human error of manual configuration tasks.
# Tutorial - Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension+ **Applies to:** :heavy_check_mark: Window :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets To configure virtual machines (VMs) in a quick and consistent manner, you can use the [Custom Script Extension for Windows](../extensions/custom-script-windows.md). In this tutorial you learn how to: > [!div class="checklist"]
-> * Use the Custom Script Extension to install IIS
-> * Create a VM that uses the Custom Script Extension
-> * View a running IIS site after the extension is applied
+> * Use the Custom Script Extension to install IIS.
+> * Create a VM that uses the Custom Script Extension.
+> * View a running IIS site after the extension is applied.
## Launch Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
+To open the Cloud Shell, select **Open Cloudshell** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
## Custom script extension overview
-The Custom Script Extension downloads and executes scripts on Azure VMs. This extension is useful for post deployment configuration, software installation, or any other configuration / management task. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run time.
-The Custom Script extension integrates with Azure Resource Manager templates, and can also be run using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API.
+The Custom Script Extension downloads and executes scripts on Azure VMs. This extension is useful for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure storage or GitHub, or you can provide scripts to the Azure portal at extension run time.
-You can use the Custom Script Extension with both Windows and Linux VMs.
+The Custom Script extension integrates with Azure Resource Manager templates and can be run by using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API.
+You can use the Custom Script Extension with both Linux and Windows VMs.
## Create virtual machine+ Set the administrator username and password for the VM with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential): ```azurepowershell-interactive $cred = Get-Credential ```
-Now you can create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVM* in the *EastUS* location. If they do not already exist, the resource group *myResourceGroupAutomate* and supporting network resources are created. To allow web traffic, the cmdlet also opens port *80*.
+Now you can create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVM* in the *EastUS* location. If they don't already exist, the resource group *myResourceGroupAutomate* and supporting network resources are created. To allow web traffic, the cmdlet also opens port *80*.
```azurepowershell-interactive New-AzVm `
New-AzVm `
-Credential $cred ```
-It takes a few minutes for the resources and VM to be created.
-
+The resources and VM take a few minutes to be created.
## Automate IIS install+ Use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) to install the Custom Script Extension. The extension runs `powershell Add-WindowsFeature Web-Server` to install the IIS webserver and then updates the *Default.htm* page to show the hostname of the VM: ```azurepowershell-interactive
Set-AzVMExtension -ResourceGroupName "myResourceGroupAutomate" `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' ``` - ## Test web site
-Obtain the public IP address of your load balancer with [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress). The following example obtains the IP address for *myPublicIPAddress* created earlier:
+
+Obtain the public IP address of your load balancer with [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress). The following example obtains the IP address for `myPublicIPAddress` created earlier:
```azurepowershell-interactive Get-AzPublicIPAddress `
Get-AzPublicIPAddress `
You can then enter the public IP address into a web browser. The website is displayed, including the hostname of the VM that the load balancer distributed traffic to, as in the following example:
-![Running IIS website](./media/tutorial-automate-vm-deployment/running-iis-website.png)
- ## Next steps In this tutorial, you automated the IIS install on a VM. You learned how to: > [!div class="checklist"]
-> * Use the Custom Script Extension to install IIS
-> * Create a VM that uses the Custom Script Extension
-> * View a running IIS site after the extension is applied
+> * Use the Custom Script Extension to install IIS.
+> * Create a VM that uses the Custom Script Extension.
+> * View a running IIS site after the extension is applied.
Advance to the next tutorial to learn how to create custom VM images.
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-secure-web-server.md
Title: "Tutorial: Secure a Windows web server with TLS/SSL certificates in Azure"
-description: In this tutorial, you learn how to use Azure PowerShell to secure a Windows virtual machine that runs the IIS web server with TLS/SSL certificates stored in Azure Key Vault.
+ Title: "Tutorial: Secure a Windows web server with TLS certificates in Azure"
+description: Learn how to use Azure PowerShell to secure a Windows virtual machine that runs the IIS web server with TLS certificates stored in Azure Key Vault.
Previously updated : 02/09/2018 Last updated : 04/05/2023
-#Customer intent: As an IT administrator or developer, I want to learn how to secure a web server with TLS/SSL certificates so that I can protect my customer data on web applications that I build and run.
+#Customer intent: As an IT administrator or developer, I want to learn how to secure a web server with TLS certificates so that I can protect my customer data on web applications that I build and run.
-# Tutorial: Secure a web server on a Windows virtual machine in Azure with TLS/SSL certificates stored in Key Vault
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+# Tutorial: Secure a web server on a Windows virtual machine in Azure with TLS certificates stored in Key Vault
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
> [!NOTE]
-> Currently this doc only works for Generalized images. If attempting this tutorial using a Specialized disk you will receive an error.
+> Currently, this doc only works for Generalized images. If you attempt this tutorial by using a Specialized disk, you'll receive an error.
-To secure web servers, a Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), certificate can be used to encrypt web traffic. These TLS/SSL certificates can be stored in Azure Key Vault, and allow secure deployments of certificates to Windows virtual machines (VMs) in Azure. In this tutorial you learn how to:
+To secure web servers, a Transport Layer Security (TLS) certificate can be used to encrypt web traffic. TLS certificates can be stored in Azure Key Vault and allow secure deployments of certificates to Windows virtual machines (VMs) in Azure. In this tutorial you learn how to:
> [!div class="checklist"]
-> * Create an Azure Key Vault
-> * Generate or upload a certificate to the Key Vault
-> * Create a VM and install the IIS web server
-> * Inject the certificate into the VM and configure IIS with a TLS binding
-
+> * Create an Azure Key Vault.
+> * Generate or upload a certificate to the Key Vault.
+> * Create a VM and install the IIS web server.
+> * Inject the certificate into the VM and configure IIS with a TLS binding.
## Launch Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+To open the Cloud Shell, select **Open Cloudshell** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
## Overview
-Azure Key Vault safeguards cryptographic keys and secrets, such certificates or passwords. Key Vault helps streamline the certificate management process and enables you to maintain control of keys that access those certificates. You can create a self-signed certificate inside Key Vault, or upload an existing, trusted certificate that you already own.
-Rather than using a custom VM image that includes certificates baked-in, you inject certificates into a running VM. This process ensures that the most up-to-date certificates are installed on a web server during deployment. If you renew or replace a certificate, you don't also have to create a new custom VM image. The latest certificates are automatically injected as you create additional VMs. During the whole process, the certificates never leave the Azure platform or are exposed in a script, command-line history, or template.
+Azure Key Vault safeguards cryptographic keys and secrets, such as certificates or passwords. Key Vault helps streamline the certificate management process and enables you to maintain control of keys that access those certificates. You can create a self-signed certificate inside Key Vault, or you can upload an existing, trusted certificate that you already own.
+Rather than using a custom VM image with certificates baked in, you inject certificates into a running VM. This process ensures that the most up-to-date certificates are installed on a web server during deployment. If you renew or replace a certificate, you don't also have to create a new custom VM image. The latest certificates are automatically injected as you create more VMs. During the whole process, the certificates never leave the Azure platform or are exposed in a script, command-line history, or template.
## Create an Azure Key Vault+ Before you can create a Key Vault and certificates, create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *myResourceGroupSecureWeb* in the *East US* location: ```azurepowershell-interactive
$location = "East US"
New-AzResourceGroup -ResourceGroupName $resourceGroup -Location $location ```
-Next, create a Key Vault with [New-AzKeyVault](/powershell/module/az.keyvault/new-azkeyvault). Each Key Vault requires a unique name, and should be all lower case. Replace `mykeyvault` in the following example with your own unique Key Vault name:
+Next, create a Key Vault with [New-AzKeyVault](/powershell/module/az.keyvault/new-azkeyvault). Each Key Vault requires a unique name and should be all lower case. Replace `mykeyvault` with your own unique Key Vault name in the following example:
```azurepowershell-interactive $keyvaultName="mykeyvault"
New-AzKeyVault -VaultName $keyvaultName `
-EnabledForDeployment ```
-## Generate a certificate and store in Key Vault
-For production use, you should import a valid certificate signed by trusted provider with [Import-AzKeyVaultCertificate](/powershell/module/az.keyvault/import-azkeyvaultcertificate). For this tutorial, the following example shows how you can generate a self-signed certificate with [Add-AzKeyVaultCertificate](/powershell/module/az.keyvault/add-azkeyvaultcertificate) that uses the default certificate policy from [New-AzKeyVaultCertificatePolicy](/powershell/module/az.keyvault/new-azkeyvaultcertificatepolicy).
+## Generate a certificate and store it in Key Vault
+
+For production use, you should import a valid certificate signed by a trusted provider with [Import-AzKeyVaultCertificate](/powershell/module/az.keyvault/import-azkeyvaultcertificate). For this tutorial, the following example shows how you can generate a self-signed certificate with [Add-AzKeyVaultCertificate](/powershell/module/az.keyvault/add-azkeyvaultcertificate) that uses the default certificate policy from [New-AzKeyVaultCertificatePolicy](/powershell/module/az.keyvault/new-azkeyvaultcertificatepolicy).
```azurepowershell-interactive $policy = New-AzKeyVaultCertificatePolicy `
Add-AzKeyVaultCertificate `
-CertificatePolicy $policy ``` - ## Create a virtual machine+ Set an administrator username and password for the VM with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential): ```azurepowershell-interactive $cred = Get-Credential ```
-Now you can create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVM* in the *EastUS* location. If they do not already exist, the supporting network resources are created. To allow secure web traffic, the cmdlet also opens port *443*.
+Now you can create the VM with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates a VM named *myVM* in the *EastUS* location. If they don't already exist, the supporting network resources are created. To allow secure web traffic, the cmdlet also opens port *443*.
```azurepowershell-interactive # Create a VM
Set-AzVMExtension -ResourceGroupName $resourceGroup `
It takes a few minutes for the VM to be created. The last step uses the Azure Custom Script Extension to install the IIS web server with [Set-AzVmExtension](/powershell/module/az.compute/set-azvmextension). - ## Add a certificate to VM from Key Vault+ To add the certificate from Key Vault to a VM, obtain the ID of your certificate with [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret). Add the certificate to the VM with [Add-AzVMSecret](/powershell/module/az.compute/add-azvmsecret): ```azurepowershell-interactive
$vm = Add-AzVMSecret -VM $vm -SourceVaultId $vaultId -CertificateStore "My" -Cer
Update-AzVM -ResourceGroupName $resourceGroup -VM $vm ``` - ## Configure IIS to use the certificate+ Use the Custom Script Extension again with [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) to update the IIS configuration. This update applies the certificate injected from Key Vault to IIS and configures the web binding: ```azurepowershell-interactive
Set-AzVMExtension -ResourceGroupName $resourceGroup `
-SettingString $publicSettings ``` - ### Test the secure web app+ Obtain the public IP address of your VM with [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress). The following example obtains the IP address for `myPublicIP` created earlier: ```azurepowershell-interactive
Get-AzPublicIPAddress -ResourceGroupName $resourceGroup -Name "myPublicIPAddress
Now you can open a web browser and enter `https://<myPublicIP>` in the address bar. To accept the security warning if you used a self-signed certificate, select **Details** and then **Go on to the webpage**:
-![Accept web browser security warning](./media/tutorial-secure-web-server/browser-warning.png)
Your secured IIS website is then displayed as in the following example:
-![View running secure IIS site](./media/tutorial-secure-web-server/secured-iis.png)
- ## Next steps
-In this tutorial, you secured an IIS web server with a TLS/SSL certificate stored in Azure Key Vault. You learned how to:
+
+In this tutorial, you secured an IIS web server with a TLS certificate stored in Azure Key Vault. You learned how to:
> [!div class="checklist"]
-> * Create an Azure Key Vault
-> * Generate or upload a certificate to the Key Vault
-> * Create a VM and install the IIS web server
-> * Inject the certificate into the VM and configure IIS with a TLS binding
+> * Create an Azure Key Vault.
+> * Generate or upload a certificate to the Key Vault.
+> * Create a VM and install the IIS web server.
+> * Inject the certificate into the VM and configure IIS with a TLS binding.
-Follow this link to see pre-built virtual machine script samples.
+For prebuilt virtual machine script samples, see:
> [!div class="nextstepaction"] > [Windows virtual machine script samples](https://github.com/Azure/azure-docs-powershell-samples/tree/master/virtual-machine)
virtual-machines Configure Oracle Dataguard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-dataguard.md
To install Oracle Data Guard, you need to create two Azure VMs on the same avail
The Marketplace image that you use to create the VMs is Oracle:Oracle-Database-Ee:12.1.0.2:latest.
+> [!NOTE]
+> Be aware of versions that are End Of Life (EOL) and no longer supported by Redhat. Uploaded images that are, at or beyond EOL will be supported on a reasonable business effort basis. Link to Redhat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204)
++ ### Sign in to Azure Sign in to your Azure subscription by using the [az login](/cli/azure/reference-index) command and follow the on-screen directions.
Create a VM by using the [az vm create](/cli/azure/vm) command.
The following example creates two VMs named `myVM1` and `myVM2`. It also creates SSH keys, if they do not already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option.
+> [!NOTE]
+> Be aware of versions that are end of life (EOL) and no longer supported by Red Hat. Uploaded images that are at or beyond EOL are supported on a reasonable business-effort basis. For more information, see Red Hat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204)
++ Create myVM1 (primary): ```azurecli az vm create \
az network nsg rule create --resource-group myResourceGroup\
Use the following command to create an SSH session with the virtual machine. Replace the IP address with the `publicIpAddress` value for your virtual machine. ```bash
-$ ssh azureuser@<publicIpAddress>
+ssh azureuser@<publicIpAddress>
``` ### Create the database on myVM1 (primary)
The Oracle software is already installed on the Marketplace image, so the next s
Switch to the Oracle superuser: ```bash
-$ sudo su - oracle
+sudo su - oracle
``` Create the database: ```bash
-$ dbca -silent \
+dbca -silent \
-createDatabase \ -templateName General_Purpose.dbc \ -gdbname cdb1 \
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/cdb1.log" for furthe
Set the ORACLE_SID and ORACLE_HOME variables: ```bash
-$ ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
-$ ORACLE_SID=cdb1; export ORACLE_SID
+ ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
+ ORACLE_SID=cdb1; export ORACLE_SID
``` Optionally, you can add ORACLE_HOME and ORACLE_SID to the /home/oracle/.bashrc file, so that these settings are saved for future logins:
export ORACLE_SID=cdb1
### Enable archive log mode on myVM1 (primary) ```bash
-$ sqlplus / as sysdba
+sqlplus / as sysdba
SQL> SELECT log_mode FROM v$database; LOG_MODE
ADR_BASE_LISTENER = /u01/app/oracle
Enable Data Guard Broker: ```bash
-$ sqlplus / as sysdba
+sqlplus / as sysdba
SQL> ALTER SYSTEM SET dg_broker_start=true; SQL> EXIT; ```
SQL> EXIT;
Start the listener: ```bash
-$ lsnrctl stop
-$ lsnrctl start
+ lsnrctl stop
+ lsnrctl start
``` ### Set up service on myVM2 (standby)
$ lsnrctl start
SSH to myVM2: ```bash
-$ ssh azureuser@<publicIpAddress>
+ssh azureuser@<publicIpAddress>
``` Log in as Oracle: ```bash
-$ sudo su - oracle
+sudo su - oracle
``` Edit or create the tnsnames.ora file, which is in the $ORACLE_HOME/network/admin folder.
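As a rough sketch of what that file can contain for this tutorial, the entries below are illustrative only; host names, ports, and service names must match your own primary and standby setup:

```bash
# Append illustrative entries for the primary (cdb1) and standby (cdb1_stby) databases.
cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
cdb1=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=myVM1)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=cdb1)))

cdb1_stby=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=myVM2)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=cdb1_stby)))
EOF
```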
ADR_BASE_LISTENER = /u01/app/oracle
Start the listener: ```bash
-$ lsnrctl stop
-$ lsnrctl start
+ lsnrctl stop
+ lsnrctl start
```
mkdir -p /u01/app/oracle/admin/cdb1/adump
Create a password file: ```bash
-$ orapwd file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwcdb1 password=OraPasswd1 entries=10
+ orapwd file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwcdb1 password=OraPasswd1 entries=10
``` Start the database on myVM2: ```bash
-$ export ORACLE_SID=cdb1
-$ sqlplus / as sysdba
+ export ORACLE_SID=cdb1
+ sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/tmp/initcdb1_stby.ora'; SQL> EXIT;
SQL> EXIT;
Restore the database by using the RMAN tool: ```bash
-$ rman TARGET sys/OraPasswd1@cdb1 AUXILIARY sys/OraPasswd1@cdb1_stby
+ rman TARGET sys/OraPasswd1@cdb1 AUXILIARY sys/OraPasswd1@cdb1_stby
``` Run the following commands in RMAN:
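The tutorial's exact RMAN commands aren't reproduced in this summary; as a hedged sketch, an active-database duplication for a standby typically looks like the following (the full walkthrough may add SPFILE parameter overrides and other clauses):

```bash
RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        NOFILENAMECHECK;
```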
export ORACLE_SID=cdb1
Enable Data Guard Broker: ```bash
-$ sqlplus / as sysdba
+sqlplus / as sysdba
SQL> ALTER SYSTEM SET dg_broker_start=true; SQL> EXIT; ```
SQL> EXIT;
Start Data Guard Manager and log in by using SYS and a password. (Do not use OS authentication.) Perform the following: ```bash
-$ dgmgrl sys/OraPasswd1@cdb1
+ dgmgrl sys/OraPasswd1@cdb1
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved.
cdb1_stby=
Start SQL*Plus: ```bash
-$ sqlplus sys/OraPasswd1@cdb1
+sqlplus sys/OraPasswd1@cdb1
SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017 Copyright (c) 1982, 2016, Oracle. All rights reserved.
SQL>
To switch from primary to standby (cdb1 to cdb1_stby): ```bash
-$ dgmgrl sys/OraPasswd1@cdb1
+dgmgrl sys/OraPasswd1@cdb1
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved.
Start SQL*Plus:
```bash
-$ sqlplus sys/OraPasswd1@cdb1_stby
+sqlplus sys/OraPasswd1@cdb1_stby
SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017 Copyright (c) 1982, 2016, Oracle. All rights reserved.
SQL>
To switch over, run the following on myVM2: ```bash
-$ dgmgrl sys/OraPasswd1@cdb1_stby
+dgmgrl sys/OraPasswd1@cdb1_stby
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved.
Start SQL*Plus:
```bash
-$ sqlplus sys/OraPasswd1@cdb1
+sqlplus sys/OraPasswd1@cdb1
SQL*Plus: Release 12.2.0.1.0 Production on Wed May 10 14:18:31 2017 Copyright (c) 1982, 2016, Oracle. All rights reserved.
virtual-machines Configure Oracle Golden Gate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-golden-gate.md
The following is a summary of the environment configuration:
> | **Golden Gate owner/replicate** |C##GGADMIN |REPUSER | > | **Golden Gate process** |EXTORA |REPORA|
+> [!NOTE]
+> Be aware of versions that are End Of Life (EOL) and no longer supported by Red Hat. Uploaded images that are at or beyond EOL are supported on a reasonable business-effort basis. For details, see Red Hat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
+ ### Sign in to Azure
az network nsg rule create --resource-group myResourceGroup\
### Connect to the virtual machine
+> [!NOTE]
+> Be aware of versions that are End Of Life (EOL) and no longer supported by Red Hat. Uploaded images that are at or beyond EOL are supported on a reasonable business-effort basis. For details, see Red Hat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
++ Use the following command to create an SSH session with the virtual machine. Replace the IP address with the `publicIpAddress` of your virtual machine. ```bash
sudo su - oracle
Create the database: ```bash
-$ dbca -silent \
+ dbca -silent \
-createDatabase \ -templateName General_Purpose.dbc \ -gdbname cdb1 \
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/cdb1.log" for more d
Set the ORACLE_SID and ORACLE_HOME variables. ```bash
-$ ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
-$ ORACLE_SID=cdb1; export ORACLE_SID
-$ LD_LIBRARY_PATH=ORACLE_HOME/lib; export LD_LIBRARY_PATH
+ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
+ORACLE_SID=cdb1; export ORACLE_SID
+LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
``` Optionally, you can add ORACLE_HOME and ORACLE_SID to the .bashrc file, so that these settings are saved for future sign-ins:
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
### Start Oracle listener ```bash
-$ lsnrctl start
+lsnrctl start
``` ### Create the database on myVM2 (replicate)
sudo su - oracle
Create the database: ```bash
-$ dbca -silent \
+ dbca -silent \
-createDatabase \ -templateName General_Purpose.dbc \ -gdbname cdb1 \
$ dbca -silent \
Set the ORACLE_SID and ORACLE_HOME variables. ```bash
-$ ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
-$ ORACLE_SID=cdb1; export ORACLE_SID
-$ LD_LIBRARY_PATH=ORACLE_HOME/lib; export LD_LIBRARY_PATH
+ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
+ORACLE_SID=cdb1; export ORACLE_SID
+LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
``` Optionally, you can add ORACLE_HOME and ORACLE_SID to the .bashrc file, so that these settings are saved for future sign-ins.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
### Start Oracle listener ```bash
-$ sudo su - oracle
-$ lsnrctl start
+sudo su - oracle
+lsnrctl start
``` ## Configure Golden Gate
To configure Golden Gate, take the steps in this section.
### Enable archive log mode on myVM1 (primary) ```bash
-$ sqlplus / as sysdba
+sqlplus / as sysdba
SQL> SELECT log_mode FROM v$database; LOG_MODE
To download and prepare the Oracle Golden Gate software, complete the following
2. After you download the .zip files to your client computer, use Secure Copy Protocol (SCP) to copy the files to your VM: ```bash
- $ scp fbo_ggs_Linux_x64_shiphome.zip <publicIpAddress>:<folder>
+ scp fbo_ggs_Linux_x64_shiphome.zip <publicIpAddress>:<folder>
``` 3. Move the .zip files to the **/opt** folder. Then change the owner of the files as follows: ```bash
- $ sudo su -
- # mv <folder>/*.zip /opt
+ sudo su -
+ mv <folder>/*.zip /opt
``` 4. Unzip the files (install the Linux unzip utility if it's not already installed): ```bash
- # yum install unzip
- # cd /opt
- # unzip fbo_ggs_Linux_x64_shiphome.zip
+ yum install unzip
+ cd /opt
+ unzip fbo_ggs_Linux_x64_shiphome.zip
``` 5. Change permission: ```bash
- # chown -R oracle:oinstall /opt/fbo_ggs_Linux_x64_shiphome
+ chown -R oracle:oinstall /opt/fbo_ggs_Linux_x64_shiphome
``` ### Prepare the client and VM to run x11 (for Windows clients only)
This is an optional step. You can skip this step if you are using a Linux client
4. In your VM, run these commands: ```bash
- # sudo su - oracle
- $ mkdir .ssh (if not already created)
- $ cd .ssh
+ sudo su - oracle
    + mkdir -p .ssh   # creates .ssh only if it doesn't already exist
+ cd .ssh
``` 5. Create a file named **authorized_keys**. Paste the contents of the key in this file, and then save the file.
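A minimal sketch of that step, assuming you're still in the oracle user's `.ssh` directory (the key string is a placeholder for your own public key):

```bash
# Still as the oracle user, inside ~/.ssh
touch authorized_keys
# Paste your public key into the file, for example:
echo "ssh-rsa AAAA...your-public-key... user@client" >> authorized_keys
# SSH ignores the file unless permissions are restrictive
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```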
To install Oracle Golden Gate, complete the following steps:
1. Sign in as oracle. (You should be able to sign in without being prompted for a password.) Make sure that Xming is running before you begin the installation. ```bash
- $ cd /opt/fbo_ggs_Linux_x64_shiphome/Disk1
- $ ./runInstaller
+ cd /opt/fbo_ggs_Linux_x64_shiphome/Disk1
+ ./runInstaller
``` 2. Select 'Oracle GoldenGate for Oracle Database 12c'. Then select **Next** to continue.
To install Oracle Golden Gate, complete the following steps:
1. Create or update the tnsnames.ora file: ```bash
- $ cd $ORACLE_HOME/network/admin
- $ vi tnsnames.ora
+ cd $ORACLE_HOME/network/admin
+ vi tnsnames.ora
cdb1= (DESCRIPTION=
To install Oracle Golden Gate, complete the following steps:
> ```bash
- $ sqlplus / as sysdba
+ sqlplus / as sysdba
SQL> CREATE USER C##GGADMIN identified by ggadmin; SQL> EXEC dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN',container=>'ALL'); SQL> GRANT DBA to C##GGADMIN container=all;
To install Oracle Golden Gate, complete the following steps:
3. Create the Golden Gate test user account: ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ sqlplus system/OraPasswd1@pdb1
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ sqlplus system/OraPasswd1@pdb1
SQL> CREATE USER test identified by test DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP; SQL> GRANT connect, resource, dba TO test; SQL> ALTER USER test QUOTA 100M on USERS;
To install Oracle Golden Gate, complete the following steps:
Start the Golden gate command-line interface (ggsci): ```bash
- $ sudo su - oracle
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ ./ggsci
+ sudo su - oracle
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ ./ggsci
GGSCI> DBLOGIN USERID test@pdb1, PASSWORD test Successfully logged into database pdb1 GGSCI> ADD SCHEMATRANDATA pdb1.test
To install Oracle Golden Gate, complete the following steps:
6. Register extract--integrated extract: ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ ./ggsci
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ ./ggsci
GGSCI> dblogin userid C##GGADMIN, password ggadmin Successfully logged into database CDB$ROOT.
To install Oracle Golden Gate, complete the following steps:
7. Set up extract checkpoints and start real-time extract: ```bash
- $ ./ggsci
+ ./ggsci
GGSCI> ADD EXTRACT EXTORA, INTEGRATED TRANLOG, BEGIN NOW EXTRACT (Integrated) added.
To install Oracle Golden Gate, complete the following steps:
In this step, you find the starting SCN, which will be used later, in a different section: ```bash
- $ sqlplus / as sysdba
+ sqlplus / as sysdba
SQL> alter session set container = pdb1; SQL> SELECT current_scn from v$database; CURRENT_SCN
To install Oracle Golden Gate, complete the following steps:
``` ```bash
- $ ./ggsci
+ ./ggsci
GGSCI> EDIT PARAMS INITEXT ```
To install Oracle Golden Gate, complete the following steps:
1. Create or update the tnsnames.ora file: ```bash
- $ cd $ORACLE_HOME/network/admin
- $ vi tnsnames.ora
+ cd $ORACLE_HOME/network/admin
+ vi tnsnames.ora
cdb1= (DESCRIPTION=
To install Oracle Golden Gate, complete the following steps:
2. Create a replicate account: ```bash
- $ sqlplus / as sysdba
+ sqlplus / as sysdba
SQL> alter session set container = pdb1; SQL> create user repuser identified by rep_pass container=current; SQL> grant dba to repuser;
To install Oracle Golden Gate, complete the following steps:
3. Create a Golden Gate test user account: ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ sqlplus system/OraPasswd1@pdb1
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ sqlplus system/OraPasswd1@pdb1
SQL> CREATE USER test identified by test DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP; SQL> GRANT connect, resource, dba TO test; SQL> ALTER USER test QUOTA 100M on USERS;
To install Oracle Golden Gate, complete the following steps:
4. Create a REPLICAT parameter file to replicate changes: ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ ./ggsci
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ ./ggsci
GGSCI> EDIT PARAMS REPORA ```
To install Oracle Golden Gate, complete the following steps:
#### 1. Set up the replication on myVM2 (replicate) ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ ./ggsci
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ ./ggsci
GGSCI> EDIT PARAMS MGR ```
Then restart the Manager service:
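A hedged sketch of restarting the Manager process from GGSCI follows (answer the confirmation prompt when stopping; the parameter edits that require the restart precede this step in the full article):

```bash
GGSCI> STOP MANAGER
GGSCI> START MANAGER
GGSCI> INFO MANAGER
```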
Start the initial load and check for errors: ```bash
-$ cd /u01/app/oracle/product/12.1.0/oggcore_1
-$ ./ggsci
+cd /u01/app/oracle/product/12.1.0/oggcore_1
+./ggsci
GGSCI> START EXTRACT INITEXT GGSCI> VIEW REPORT INITEXT ```
GGSCI> VIEW REPORT INITEXT
Replace the SCN number with the number you obtained earlier: ```bash
- $ cd /u01/app/oracle/product/12.1.0/oggcore_1
- $ ./ggsci
+ cd /u01/app/oracle/product/12.1.0/oggcore_1
+ ./ggsci
START REPLICAT REPORA, AFTERCSN 1857887 ```
az group delete --name myResourceGroup
[Create highly available virtual machines tutorial](../../linux/create-cli-complete.md)
-[Explore VM deployment CLI samples](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine)
+[Explore VM deployment CLI samples](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine)
virtual-machines Oracle Database Backup Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md
Perform the following steps for each database on the VM:
1. Before you connect, you need to set the environment variable ORACLE_SID by running the `oraenv` script which will prompt you to enter the ORACLE_SID name: ```bash
- $ . oraenv
+ . oraenv
``` 1. Add the Azure Files share as an additional database archive log file destination
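As an illustrative sketch of that step, the destination path `/mnt/orabackup/oraarch` below is an assumption; use the directory where your Azure Files share is actually mounted:

```bash
sqlplus / as sysdba
SQL> ALTER SYSTEM SET log_archive_dest_2='LOCATION=/mnt/orabackup/oraarch' SCOPE=BOTH;
SQL> EXIT;
```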
Later in this article, you'll learn how to test the recovery process. Before you
> ```bash
- $ scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
+ scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
``` # [Azure CLI](#tab/azure-cli)
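A hedged Azure CLI sketch of downloading the recovery script follows; the vault, container, and item names are placeholders, and you should confirm the exact parameters with `az backup restore files mount-rp --help`:

```azurecli
az backup restore files mount-rp \
  --resource-group myResourceGroup \
  --vault-name myRecoveryServicesVault \
  --container-name myVMContainer \
  --item-name vmoracle19c \
  --rp-name myRecoveryPointName
```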
Replace myRecoveryPointName with the name of the recovery point that you obtaine
The script is downloaded and a password is displayed, as in the following example:
-```bash
+```output
File downloaded: vmoracle19c_eus_4598131610710119312_456133188157_6931c635931f402eb543ee554e1cf06f102c6fc513d933.py. Use password c4487e40c760d29 ```
The following example shows how you to use a secure copy (scp) command to move t
> ```bash
-$ scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
+scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
```
$ scp vmoracle19c_xxxxxx_xxxxxx_xxxxxx.py azureuser@<publicIpAddress>:/tmp
To exit, enter **q**, and then search for the mounted volumes. To create a list of the added volumes, at a command prompt, enter **df -h**.
- ```
+ ```output
[root@vmoracle19c restore]# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 3.8G 0 3.8G 0% /dev
Now the database has been restored you must recover the database. Please follow
1. You may find that the instance is already running, because auto start attempted to start the database at VM boot. However, the database requires recovery and is likely to be at the mount stage only, so a preparatory shutdown is run first, followed by a startup to the mount stage. ```bash
- $ sudo su - oracle
- $ sqlplus / as sysdba
+ sudo su - oracle
+ sqlplus / as sysdba
SQL> shutdown immediate SQL> startup mount ```
Now the database has been restored you must recover the database. Please follow
``` Copy the log file path and file name for the CURRENT online log; in this example, it's `/u02/oradata/ORATEST1/redo01.log`. Switch back to the SSH session running the recover command, enter the log file information, and then press Return:
- ```bash
+ ```output
Specify log: {<RET>=suggested | filename | AUTO | CANCEL} /u02/oradata/ORATEST1/redo01.log ```
virtual-machines Oracle Database Backup Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-storage.md
**Applies to:** :heavy_check_mark: Linux VMs
-This article demonstrates the use of Azure Files as a media to back up and restore an Oracle database running on an Azure VM. The steps in this article have been tested against Oracle 12.1 and higher. You will back up the database using Oracle RMAN to an Azure file share mounted to the VM using the SMB protocol. Using Azure Files for backup media is extremely cost effective and performant. However, for very large databases, Azure Backup provides a better solution.
+This article demonstrates the use of Azure Files as a medium to back up and restore an Oracle database running on an Azure VM. The steps in this article have been tested against Oracle 12.1 and higher. You'll back up the database by using Oracle RMAN to an Azure file share mounted to the VM over the SMB protocol. Using Azure Files for backup media is cost effective and performant. However, for very large databases, Azure Backup provides a better solution.
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
This article demonstrates the use of Azure Files as a media to back up and resto
echo "export ORACLE_SID=test" >> ~oracle/.bashrc ```
-6. Start the Oracle listener if it is not already running:
+6. Start the Oracle listener if it isn't already running:
```bash
- $ lsnrctl start
+ lsnrctl start
``` The output should look similar to the following example:
This article demonstrates the use of Azure Files as a media to back up and resto
sqlplus / as sysdba ```
-9. Start the database if it is not already running:
+9. Start the database if it isn't already running:
```bash SQL> startup
To back up to Azure Files, complete these steps:
1. [Set up Azure Files](#set-up-azure-files). 1. [Mount the Azure file share to your VM](#mount-the-azure-storage-file-share-to-your-vm).
-1. [Back up the database](#backup-the-database).
+1. [Back up the database](#back-up-the-database).
1. [Restore and recover the database](#restore-and-recover-the-database). ### Set up Azure Files
-In this step, you will back up the Oracle database using Oracle Recovery Manager (RMAN) to Azure Files. Azure file shares are fully managed file shares that live in the cloud. They can be accessed using either the Server Message Block (SMB) protocol or the Network File System (NFS) protocol. This step covers creating a file share that uses the SMB protocol to mount to your VM. For information about how to mount using NFS, see [How to create an NFS share](../../../storage/files/storage-files-how-to-create-nfs-shares.md).
+In this step, you'll back up the Oracle database to Azure Files by using Oracle Recovery Manager (RMAN). Azure file shares are fully managed file shares that live in the cloud. They can be accessed by using either the Server Message Block (SMB) protocol or the Network File System (NFS) protocol. This step covers creating a file share that uses the SMB protocol and mounting it to your VM. For information about how to mount by using NFS, see [How to create an NFS share](../../../storage/files/storage-files-how-to-create-nfs-shares.md).
-When mounting the Azure Files, we will use the `cache=none` to disable caching of file share data. And to ensure files created in the share are owned by the oracle user set the `uid=oracle` and `gid=oinstall` options as well.
+When mounting the Azure file share, we'll use the `cache=none` option to disable caching of file share data. To ensure that files created in the share are owned by the oracle user, set the `uid=oracle` and `gid=oinstall` options as well.
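As an illustration of those options, an SMB mount along these lines might look like the following sketch; the storage account, share name, and credentials file are placeholders, and the portal and CLI steps that follow produce the exact command for your environment:

```bash
# Sketch only: substitute your own storage account, share name, and credentials file.
sudo mkdir -p /mnt/orabkup
sudo mount -t cifs //<storage-account>.file.core.windows.net/orabkup1 /mnt/orabkup \
  -o vers=3.0,credentials=/etc/smbcredentials/<storage-account>.cred,serverino,cache=none,uid=oracle,gid=oinstall
```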
# [Portal](#tab/azure-portal)
First, set up your storage account.
![Screenshot that shows where to select File shares.](./media/oracle-backup-recovery/file-storage-3.png)
-5. Click on ***+ File share*** and in the ***New file share*** blade name your file share ***orabkup1***. Set ***Quota*** to ***10240*** GiB and check ***Transaction optimized*** as the tier. The quota reflects an upper boundary that the file share can grow to. As we are using Standard storage, resources are PAYG and not provisioned so setting it to 10 TiB will not incur costs beyond what you use. If your backup strategy requires more storage, you must set the quota to an appropriate level to hold all backups. When you have completed the New file share blade, click ***Create***.
+5. Click ***+ File share*** and, in the ***New file share*** blade, name your file share ***orabkup1***. Set ***Quota*** to ***10240*** GiB and select ***Transaction optimized*** as the tier. The quota reflects an upper boundary that the file share can grow to. Because we're using standard storage, resources are pay-as-you-go and not provisioned, so setting the quota to 10 TiB doesn't incur costs beyond what you use. If your backup strategy requires more storage, set the quota to a level that can hold all backups. When you've completed the New file share blade, click ***Create***.
![Screenshot that shows where to add a new file share.](./media/oracle-backup-recovery/file-storage-4.png)
To set up your storage account and file share run the following commands in Azur
//orabackup1.file.core.windows.net/orabackup 10T 0 10T 0% /mnt/orabackup ```
-### Backup the database
+### Back up the database
-In this section, we will be using Oracle Recovery Manager (RMAN) to take a full backup of the database and archive logs and write the backup as a backup set to the Azure File share mounted earlier.
+In this section, we'll use Oracle Recovery Manager (RMAN) to take a full backup of the database and archive logs, and write the backup as a backup set to the Azure file share mounted earlier.
1. Configure RMAN to back up to the Azure Files mount point: ```bash
- $ rman target /
+ rman target /
RMAN> configure snapshot controlfile name to '/mnt/orabkup/snapcf_ev.f'; RMAN> configure channel 1 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s'; RMAN> configure channel 2 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s'; ```
-2. In this example, we are limiting the size of RMAN backup pieces to 1 TiB. Please note the RMAN backup MAXPIECESIZE can go upto 4TiB as Azure standard file shares and Premium File Shares have a maximum file size limit of 4 TiB. For more information, see [Azure Files Scalability and Performance Targets](../../../storage/files/storage-files-scale-targets.md).)
+2. In this example, we're limiting the size of RMAN backup pieces to 1 TiB. The RMAN backup MAXPIECESIZE can go up to 4 TiB because Azure standard file shares and premium file shares have a maximum file size limit of 4 TiB. For more information, see [Azure Files Scalability and Performance Targets](../../../storage/files/storage-files-scale-targets.md).
```bash RMAN> configure channel device type disk maxpiecesize 4000G;
virtual-machines Redhat In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-in-place-upgrade.md
During an in-place upgrade, the earlier RHEL OS major version will be replaced w
## Upgrade from RHEL 7 VMs to RHEL 8 VMs Instructions for an in-place upgrade from Red Hat Enterprise Linux 7 VMs to Red Hat Enterprise Linux 8 VMs on Azure are provided in the [Red Hat documentation on upgrading from RHEL 7 to RHEL 8](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/upgrading_from_rhel_7_to_rhel_8/index).
+## Upgrade from RHEL 8 VMs to RHEL 9 VMs
+Instructions for an in-place upgrade from Red Hat Enterprise Linux 8 VMs to Red Hat Enterprise Linux 9 VMs on Azure are provided in the [Red Hat documentation on upgrading from RHEL 8 to RHEL 9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/index).
## Upgrade SAP environments from RHEL 7 VMs to RHEL 8 VMs Instructions for an in-place upgrade from Red Hat Enterprise Linux 7 SAP VMs to Red Hat Enterprise Linux 8 SAP VMs on Azure are provided in the [Red Hat documentation on upgrading from RHEL 7 SAP to RHEL 8 SAP](https://access.redhat.com/solutions/5154031).
+## Upgrade SAP environments from RHEL 8 VMs to RHEL 9 VMs
+Instructions for an in-place upgrade from Red Hat Enterprise Linux 8 SAP VMs to Red Hat Enterprise Linux 9 SAP VMs on Azure are provided in the [Red Hat documentation on upgrading SAP environments from RHEL 8 to RHEL 9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_for_sap_solutions/9/html-single/how_to_in-place_upgrade_sap_environments_from_rhel_8_to_rhel_9/index).
+++ ## Next steps * Learn more about [Red Hat images in Azure](./redhat-images.md). * Learn more about [Red Hat update infrastructure](./redhat-rhui.md). * Learn more about the [RHEL BYOS offer](./byos.md). * To learn more about the Red Hat in-place upgrade processes, see [Upgrading from RHEL 7 TO RHEL 8](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/upgrading_from_rhel_7_to_rhel_8/index) in the Red Hat documentation.
-* To learn more about Red Hat support policies for all versions of RHEL, see [Red Hat Enterprise Linux life cycle](https://access.redhat.com/support/policy/updates/errata) in the Red Hat documentation.
+* To learn more about Red Hat support policies for all versions of RHEL, see [Red Hat Enterprise Linux life cycle](https://access.redhat.com/support/policy/updates/errata) in the Red Hat documentation.
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
> [!NOTE] > It is also recommended to create a ROA for any existing ASN that is advertising the range to avoid any issues during migration.
+> [!IMPORTANT]
+> While Microsoft will not stop advertising the range after the specified date, it's strongly recommended to independently create a follow-up ROA if the original expiration date has passed, so that external carriers don't reject the advertisement.
+ ### Certificate readiness To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process. > [!NOTE]
-> It is also recommended to create a ROA for any existing ASN that is advertising the range to avoid any issues during migration. Also note that Microsoft will not stop advertising the range after the specified date, but it is recommended to independently create a follow-up ROA if the original expiration date has passed.
+> It is also recommended to create a ROA for any existing ASN that is advertising the range to avoid any issues during migration.
+
+> [!IMPORTANT]
+> While Microsoft will not stop advertising the range after the specified date, it's strongly recommended to independently create a follow-up ROA if the original expiration date has passed, so that external carriers don't reject the advertisement.
### Certificate readiness
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process. > [!NOTE]
-> It is also recommended to create a ROA for any existing ASN that is advertising the range to avoid any issues during migration. Also note that Microsoft will not stop advertising the range after the specified date, but it is recommended to independently create a follow-up ROA if the original expiration date has passed.
+> It is also recommended to create a ROA for any existing ASN that is advertising the range to avoid any issues during migration.
+
+> [!IMPORTANT]
+> While Microsoft will not stop advertising the range after the specified date, it's strongly recommended to independently create a follow-up ROA if the original expiration date has passed, so that external carriers don't reject the advertisement.
### Certificate readiness
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 04/22/2022 Last updated : 04/07/2023 ms.devlang: azurecli
If you use the Azure portal to create a Resource Manager virtual network gateway
**PowerShell**
-The following PowerShell example specifies the `-GatewaySku` as VpnGw1. When using PowerShell to create a gateway, you have to first create the IP configuration, then use a variable to refer to it. In this example, the configuration variable is $gwipconfig.
+The following PowerShell example specifies the `-GatewaySku` as VpnGw1. When using PowerShell to create a gateway, you must first create the IP configuration, then use a variable to refer to it. In this example, the configuration variable is $gwipconfig.
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWPIP --r
### <a name="resizechange"></a>Resizing or changing a SKU
-If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there isn't much downtime because you don't have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you'll want to do that. However, there are rules regarding resizing:
+If you have a VPN gateway and you want to use a different gateway SKU, your options are either to resize your gateway SKU or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there isn't much downtime because you don't have to delete and rebuild the gateway. While it's faster to resize your gateway SKU, there are rules regarding resizing:
1. Except for the Basic SKU, you can resize a VPN gateway SKU to another VPN gateway SKU within the same generation (Generation1 or Generation2). For example, VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1 but not to VpnGw2 of Generation2.
-2. When working with the old gateway SKUs, you can resize between Standard and HighPerformance SKUs.
-3. You **cannot** resize from Basic/Standard/HighPerformance SKUs to VpnGw SKUs. You must instead, [change](#change) to the new SKUs.
+1. When working with the old gateway SKUs, you can resize between Standard and HighPerformance SKUs.
+1. You **cannot** resize from Basic/Standard/HighPerformance SKUs to VpnGw SKUs. Instead, you must [change](#change) to the new SKUs.
#### <a name="resizegwsku"></a>To resize a gateway
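For orientation only, a hedged Azure CLI sketch of a resize follows, assuming a gateway named VNet1GW in resource group TestRG1; parameter availability can vary by CLI version, so check `az network vnet-gateway update --help` before relying on it:

```azurecli
# Sketch: move an existing gateway to another SKU within the same generation.
az network vnet-gateway update \
  --resource-group TestRG1 \
  --name VNet1GW \
  --sku VpnGw2
```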
New-AzVirtualNetworkGatewayConnection -Name localtovon -ResourceGroupName testrg
## <a name="vpntype"></a>VPN types
-When you create the virtual network gateway for a VPN gateway configuration, you must specify a VPN type. The VPN type that you choose depends on the connection topology that you want to create. For example, a P2S connection requires a RouteBased VPN type. A VPN type can also depend on the hardware that you're using. S2S configurations require a VPN device. Some VPN devices only support a certain VPN type.
+When you create the virtual network gateway for a VPN gateway configuration, you must specify a *VPN type*. The VPN type that you choose depends on the connection topology that you want to create. For example, a P2S connection requires a RouteBased VPN type. A VPN type can also depend on the hardware that you're using. S2S configurations require a VPN device. Some VPN devices only support a certain VPN type.
-The VPN type you select must satisfy all the connection requirements for the solution you want to create. For example, if you want to create a S2S VPN gateway connection and a P2S VPN gateway connection for the same virtual network, you would use VPN type *RouteBased* because P2S requires a RouteBased VPN type. You would also need to verify that your VPN device supported a RouteBased VPN connection.
+The VPN type you select must satisfy all the connection requirements for the solution you want to create. For example, if you want to create an S2S VPN gateway connection and a P2S VPN gateway connection for the same virtual network, use the VPN type *RouteBased* because P2S requires a RouteBased VPN type. You also need to verify that your VPN device supports a RouteBased VPN connection.
+
+Once a virtual network gateway has been created, you can't change the VPN type. If you want a different VPN type, first delete the virtual network gateway, and then create a new gateway.
-Once a virtual network gateway has been created, you can't change the VPN type. You have to delete the virtual network gateway and create a new one.
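As a hedged Azure CLI sketch of that delete-and-recreate flow (the resource names are illustrative and reuse values that appear elsewhere in this article):

```azurecli
# Sketch: remove the existing gateway, then create a new one with the VPN type you need.
az network vnet-gateway delete --resource-group TestRG1 --name VNet1GW
az network vnet-gateway create \
  --resource-group TestRG1 \
  --name VNet1GW \
  --vnet VNet1 \
  --public-ip-address VNet1GWPIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1
```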
There are two VPN types: [!INCLUDE [vpn-gateway-vpntype](../../includes/vpn-gateway-vpntype-include.md)]
New-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg `
## <a name="gwsub"></a>Gateway subnet
-Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to deploy the virtual network gateway VMs and services to.
+Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to which it should deploy the virtual network gateway VMs and services.
>[!NOTE] >[!INCLUDE [vpn-gateway-gwudr-warning.md](../../includes/vpn-gateway-gwudr-warning.md)] >
-When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
+When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29 (applicable to Basic SKU only), we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.). This will accommodate most configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29 (applicable to Basic SKU only), we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.). This accommodates most configurations.
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.0.3.0/2
A local network gateway is different than a virtual network gateway. When creating a VPN gateway configuration, the local network gateway usually represents your on-premises network and the corresponding VPN device. In the classic deployment model, the local network gateway was referred to as a Local Site.
-You give the local network gateway a name, the public IP address or the fully qualified domain name (FQDN) of the on-premises VPN device, and specify the address prefixes that are located on the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you've specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you'll provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on-premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
+When you configure a local network gateway, you specify the name, the public IP address or fully qualified domain name (FQDN) of the on-premises VPN device, and the address prefixes located at the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you've specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on-premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
The following PowerShell example creates a new local network gateway:
vpn-gateway Vpn Gateway Create Site To Site Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
This article shows you how to use PowerShell to create a Site-to-Site VPN gatewa
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
-![Site-to-Site VPN Gateway cross-premises connection diagram](./media/vpn-gateway-create-site-to-site-rm-powershell/site-to-site-diagram.png)
## <a name="before"></a>Before you begin
The examples in this article use the following values. You can use these values
VnetName = VNet1 ResourceGroup = TestRG1
-Location = East US 
-AddressSpace = 10.1.0.0/16 
-SubnetName = Frontend 
-Subnet = 10.1.0.0/24 
+Location = East US
+AddressSpace = 10.1.0.0/16
+SubnetName = Frontend
+Subnet = 10.1.0.0/24
GatewaySubnet = 10.1.255.0/27 LocalNetworkGatewayName = Site1
-LNG Public IP = <On-premises VPN device IP address> 
-Local Address Prefixes = 10.101.0.0/24, 10.101.1.0/24
+LNG Public IP = <On-premises VPN device IP address>
+Local Address Prefixes = 10.0.0.0/24, 20.0.0.0/24
Gateway Name = VNet1GW PublicIP = VNet1GWPIP
-Gateway IP Config = gwipconfig1 
-VPNType = RouteBased 
-GatewayType = Vpn 
+Gateway IP Config = gwipconfig1
+VPNType = RouteBased
+GatewayType = Vpn
ConnectionName = VNet1toSite1 ```
To add a local network gateway with a single address prefix:
```azurepowershell-interactive New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `
- -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix '10.101.0.0/24'
+ -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix '10.0.0.0/24'
``` To add a local network gateway with multiple address prefixes: ```azurepowershell-interactive New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `
- -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix @('10.101.0.0/24','10.101.1.0/24')
+ -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix @('20.0.0.0/24','10.0.0.0/24')
``` To modify IP address prefixes for your local network gateway:
VPN Gateway currently only supports *Dynamic* Public IP address allocation. You
Request a Public IP address that will be assigned to your virtual network VPN gateway. ```azurepowershell-interactive
-$gwpip= New-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1 -Location 'East US' -AllocationMethod Dynamic
+$gwpip= New-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1 -Location 'East US' -AllocationMethod Static -Sku Standard
``` ## <a name="GatewayIPConfig"></a>4. Create the gateway IP addressing configuration
Create the virtual network VPN gateway.
Use the following values:
-* The *-GatewayType* for a Site-to-Site configuration is *Vpn*. The gateway type is always specific to the configuration that you are implementing. For example, other gateway configurations may require -GatewayType ExpressRoute.
+* The *-GatewayType* for a site-to-site configuration is *Vpn*. The gateway type is always specific to the configuration that you are implementing. For example, other gateway configurations may require -GatewayType ExpressRoute.
* The *-VpnType* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). For more information about VPN gateway types, see [About VPN Gateway](vpn-gateway-about-vpngateways.md). * Select the Gateway SKU that you want to use. There are configuration limitations for certain SKUs. For more information, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku). If you get an error when creating the VPN gateway regarding the -GatewaySku, verify that you have installed the latest version of the PowerShell cmdlets.