Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md | Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 03/01/2023 Last updated : 03/06/2023 Welcome to what's new in Azure Active Directory B2C documentation. This article - [Migrate applications using header-based authentication to Azure Active Directory B2C with Grit's app proxy](partner-grit-app-proxy.md) - [Configure Grit's biometric authentication with Azure Active Directory B2C](partner-grit-authentication.md)+- [Create and run your own custom policies in Azure Active Directory B2C](custom-policies-series-overview.md) +- [Write your first Azure Active Directory B2C custom policy - Hello World!](custom-policies-series-hello-world.md) +- [Collect and manipulate user inputs by using Azure AD B2C custom policy](custom-policies-series-collect-user-input.md) +- [Validate user inputs by using Azure Active Directory B2C custom policy](custom-policies-series-validate-user-input.md) +- [Create branching in user journey by using Azure Active Directory B2C custom policy](custom-policies-series-branch-user-journey.md) +- [Validate custom policy files by using TrustFrameworkPolicy schema](custom-policies-series-install-xml-extensions.md) +- [Call a REST API by using Azure Active Directory B2C custom policy](custom-policies-series-call-rest-api.md) +- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) +- [Set up a sign-up and sign-in flow by using Azure Active Directory B2C custom policy](custom-policies-series-sign-up-or-sign-in.md) +- [Set up a sign-up and sign-in flow with a social account by using Azure Active Directory B2C custom policy](custom-policies-series-sign-up-or-sign-in-federation.md) +- [Manage administrator accounts in Azure Active Directory B2C](tenant-management-manage-administrator.md) +- [Manage emergency access accounts in Azure Active Directory B2C](tenant-management-emergency-access-account.md) +- [Review tenant creation permission in Azure Active Directory B2C](tenant-management-check-tenant-creation-permission.md) +- [Find tenant name and tenant ID in Azure Active Directory B2C](tenant-management-read-tenant-name.md) ### Updated articles |
active-directory-domain-services | Password Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md | -To manage user security in Azure Active Directory Domain Services (Azure AD DS), you can define fine-grained password policies that control account lockout settings or minimum password length and complexity. A default fine grained password policy is created and applied to all users in an Azure AD DS managed domain. To provide granular control and meet specific business or compliance needs, additional policies can be created and applied to specific groups of users. +To manage user security in Azure Active Directory Domain Services (Azure AD DS), you can define fine-grained password policies that control account lockout settings or minimum password length and complexity. A default fine grained password policy is created and applied to all users in an Azure AD DS managed domain. To provide granular control and meet specific business or compliance needs, additional policies can be created and applied to specific users or groups. This article shows you how to create and configure a fine-grained password policy in Azure AD DS using the Active Directory Administrative Center. To create a custom password policy, you use the Active Directory Administrative  -1. Password policies can only be applied to groups. In the **Locations** dialog, expand the domain name, such as *aaddscontoso.com*, then select an OU, such as **AADDC Users**. If you have a custom OU that contains a group of users you wish to apply, select that OU. +1. In the **Locations** dialog, expand the domain name, such as *aaddscontoso.com*, then select an OU, such as **AADDC Users**. If you have a custom OU that contains a group of users you wish to apply, select that OU.  -1. Type the name of the group you wish to apply the policy to, then select **Check Names** to validate that the group exists. +1. Type the name of the user or group you wish to apply the policy to. Select **Check Names** to validate the account.  -1. With the name of the group you selected now displayed in **Directly Applies To** section, select **OK** to save your custom password policy. +1. Click **OK** to save your custom password policy. ## Next steps |
active-directory | Concept Sspr Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-writeback.md | Password writeback provides the following features: * **Supports side-by-side domain-level deployment** using [Azure AD Connect](tutorial-enable-sspr-writeback.md) or [cloud sync](tutorial-enable-cloud-sync-sspr-writeback.md) to target different sets of users depending on their needs, including users who are in disconnected domains. > [!NOTE]-> Administrator accounts that exist within protected groups in on-premises AD can be used with password writeback. Administrators can change their password in the cloud but can't reset a forgotten password. For more information about protected groups, see [Protected accounts and groups in AD DS](/windows-server/identity/ad-ds/plan/security-best-practices/appendix-c--protected-accounts-and-groups-in-active-directory). +> The on-premises service account that handles password write-back requests cannot change the passwords for users that belong to protected groups. Administrators can change their password in the cloud but they cannot use password write-back to reset a forgotten password for their on-premises user. For more information about protected groups, see [Protected accounts and groups in AD DS](/windows-server/identity/ad-ds/plan/security-best-practices/appendix-c--protected-accounts-and-groups-in-active-directory). To get started with SSPR writeback, complete either one or both of the following tutorials: |
active-directory | Howto Authentication Passwordless Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-troubleshoot.md | The following event logs and registry key info are collected: ### Deployment Issues -To troubleshoot issues with deploying the Azure AD Kerberos Server, use the new PowerShell module included with Azure AD Connect. +To troubleshoot issues with deploying the Azure AD Kerberos Server, use the logs for the new [AzureADHybridAuthenticationManagement](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement) PowerShell module. #### Viewing the logs -The Azure AD Kerberos Server PowerShell cmdlets use the same logging as the standard Azure AD Connect Wizard. To view information or error details from the cmdlets, complete the following steps: +The Azure AD Kerberos Server PowerShell cmdlets in the [AzureADHybridAuthenticationManagement](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement) module use the same logging as the standard Azure AD Connect Wizard. To view information or error details from the cmdlets, complete the following steps: -1. On the Azure AD Connect Server, browse to `C:\ProgramData\AADConnect\`. This folder is hidden by default. +1. On the machine where the [AzureADHybridAuthenticationManagement](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement) module was used, browse to `C:\ProgramData\AADConnect\`. This folder is hidden by default. 1. Open and view the most recent `trace-*.log` file located in the directory. #### Viewing the Azure AD Kerberos Server Objects To view the Azure AD Kerberos Server Objects and verify they are in good order, complete the following steps: -1. On the Azure AD Connect Server, open PowerShell and navigate to `C:\Program Files\Microsoft Azure Active Directory Connect\AzureADKerberos\` +1. On the Azure AD Connect Server or any other machine where the [AzureADHybridAuthenticationManagement](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement) module is installed, open PowerShell and navigate to `C:\Program Files\Microsoft Azure Active Directory Connect\AzureADKerberos\` 1. Run the following PowerShell commands to view the Azure AD Kerberos Server from both Azure AD and on-premises AD DS. Replace *corp.contoso.com* with the name of your on-premises AD DS domain. |
active-directory | Howto Mfa Getstarted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md | description: Learn about deployment considerations and strategy for successful i Previously updated : 01/29/2023 Last updated : 03/06/2023 If the user does not have a backup method available, you can: - Provide them a Temporary Access Pass so that they can manage their own authentication methods. You can also provide a Temporary Access Pass to enable temporary access to resources. - Update their methods as an administrator. To do so, select the user in the Azure portal, then select Authentication methods and update their methods.-User communications + ## Plan integration with on-premises systems |
active-directory | Howto Vm Sign In Azure Ad Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md | Ensure that your client meets the following requirements: - TCP connectivity from the client to either the public or private IP address of the VM. (ProxyCommand or SSH forwarding to a machine with connectivity also works.) > [!IMPORTANT]-> SSH clients based on PuTTY don't support OpenSSH certificates and can't be used to log in with Azure AD OpenSSH certificate-based authentication. +> SSH clients based on PuTTY now support OpenSSH certificates and can be used to log in with Azure AD OpenSSH certificate-based authentication. ## Enable Azure AD login for a Linux VM in Azure |
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | |
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | |
active-directory | Cross Tenant Synchronization Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-topology.md | |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md | |
active-directory | Acunetix 360 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acunetix-360-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Acunetix 360 for automatic user provisioning with Azure Active Directory | Microsoft Docs' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Acunetix 360. +++writer: twimmers ++ms.assetid: cb0c2e2c-ade9-4e6b-9ce5-d7c7d2743d90 ++++ Last updated : 03/06/2023++++# Tutorial: Configure Acunetix 360 for automatic user provisioning ++This tutorial describes the steps you need to perform in both Acunetix 360 and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Acunetix 360](https://www.acunetix.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Acunetix 360. +> * Remove users in Acunetix 360 when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Acunetix 360. +> * Provision groups and group memberships in Acunetix 360 +> * [Single sign-on](acunetix-360-tutorial.md) to Acunetix 360 (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An administrator account with Acunetix 360. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Acunetix 360](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Acunetix 360 to support provisioning with Azure AD ++1. Log in to [Acunetix 360 admin console](https://online.acunetix360.com/). +1. Click on profile logo and navigate to **API Settings**. +1. Enter your **Current Password** and then click on **Submit**. +1. Copy and save the **Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Acunetix 360 application in the Azure portal. + >[!NOTE] + >Click on **Reset API Token** in order to reset the Token. +1. And `https://online.acunetix360.com/scim/v2` will be entered in the **Tenant Url** field in the Provisioning tab of your Acunetix 360 application in the Azure portal. ++++## Step 3. Add Acunetix 360 from the Azure AD application gallery ++Add Acunetix 360 from the Azure AD application gallery to start managing provisioning to Acunetix 360. If you have previously set up Acunetix 360 for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. 
Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Acunetix 360 ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Acunetix 360 in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Acunetix 360**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Acunetix 360 Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Acunetix 360. If the connection fails, ensure your Acunetix 360 account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Acunetix 360**. ++1. Review the user attributes that are synchronized from Azure AD to Acunetix 360 in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Acunetix 360 for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Acunetix 360 API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Acunetix 360| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |emails[type eq "work"].value|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + |phoneNumbers[type eq "mobile"].value|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Acunetix 360**. ++1. 
Review the group attributes that are synchronized from Azure AD to Acunetix 360 in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Acunetix 360 for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Acunetix 360| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Acunetix 360, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Acunetix 360 by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Ardoq Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ardoq-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Ardoq for automatic user provisioning with Azure Active Directory | Microsoft Docs' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Ardoq. +++writer: twimmers ++ms.assetid: 0339e63a-5262-4019-a85d-18c9617fc4b3 ++++ Last updated : 03/02/2023++++# Tutorial: Configure Ardoq for automatic user provisioning ++This tutorial describes the steps you need to perform in both Ardoq and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Ardoq](https://www.ardoq.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Ardoq. +> * Remove users in Ardoq when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Ardoq. +> * [Single sign-on](ardoq-tutorial.md) to Ardoq (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An administrator account with Ardoq. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Ardoq](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Ardoq to support provisioning with Azure AD ++1. Log in to [Ardoq](https://aad.ardoq.com/). +1. In the left menu click on profile logo and navigate to **Organization Settings->Manage Organization->Manage SCIM Token**. +1. Click on **Generate new**. +1. Copy and save the **Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Ardoq application in the Azure portal. +1. And `https://aad.ardoq.com/api/scim/v2` will be entered in the **Tenant Url** field in the Provisioning tab of your Ardoq application in the Azure portal. ++## Step 3. Add Ardoq from the Azure AD application gallery ++Add Ardoq from the Azure AD application gallery to start managing provisioning to Ardoq. If you have previously set up Ardoq for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. 
If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Ardoq ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Ardoq in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Ardoq**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Ardoq Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Ardoq. If the connection fails, ensure your Ardoq account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Ardoq**. ++1. Review the user attributes that are synchronized from Azure AD to Ardoq in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Ardoq for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Ardoq API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Ardoq| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |displayName|String||✓ + |roles[primary eq "True"].value|String||✓ + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Ardoq, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users that you would like to provision to Ardoq by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. 
The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Netsparker Enterprise Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsparker-enterprise-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Netsparker Enterprise for automatic user provisioning with Azure Active Directory | Microsoft Docs' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Netsparker Enterprise. +++writer: twimmers ++ms.assetid: 6e951318-213e-40d1-9947-88242059f877 ++++ Last updated : 03/02/2023++++# Tutorial: Configure Netsparker Enterprise for automatic user provisioning ++This tutorial describes the steps you need to perform in both Netsparker Enterprise and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Netsparker Enterprise](https://www.netsparker.com/product/enterprise/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Netsparker Enterprise. +> * Remove users in Netsparker Enterprise when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Netsparker Enterprise. +> * Provision groups and group memberships in Netsparker Enterprise +> * [Single sign-on](netsparker-enterprise-tutorial.md) to Netsparker Enterprise (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An administrator account with Netsparker Enterprise. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Netsparker Enterprise](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Netsparker Enterprise to support provisioning with Azure AD ++1. Log in to [Netsparker Enterprise admin console](https://www.netsparkercloud.com). +1. Click on profile logo and navigate to **API Settings**. +1. Enter your **Current Password** and then click on **Submit**. +1. Copy and save the **Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Netsparker Enterprise application in the Azure portal. + >[!NOTE] + >Click on **Reset API Token** in order to reset the Token. +1. And `https://www.netsparkercloud.com/scim/v2` will be entered in the **Tenant Url** field in the Provisioning tab of your Netsparker Enterprise application in the Azure portal. ++## Step 3. Add Netsparker Enterprise from the Azure AD application gallery ++Add Netsparker Enterprise from the Azure AD application gallery to start managing provisioning to Netsparker Enterprise. 
If you have previously set up Netsparker Enterprise for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Netsparker Enterprise ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Netsparker Enterprise in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Netsparker Enterprise**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Netsparker Enterprise Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Netsparker Enterprise. If the connection fails, ensure your Netsparker Enterprise account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Netsparker Enterprise**. ++1. Review the user attributes that are synchronized from Azure AD to Netsparker Enterprise in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Netsparker Enterprise for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Netsparker Enterprise API supports filtering users based on that attribute. Select the **Save** button to commit any changes. 
++ |Attribute|Type|Supported for filtering|Required by Netsparker Enterprise| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |emails[type eq "work"].value|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + |phoneNumbers[type eq "mobile"].value|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Netsparker Enterprise**. ++1. Review the group attributes that are synchronized from Azure AD to Netsparker Enterprise in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Netsparker Enterprise for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Netsparker Enterprise| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Netsparker Enterprise, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Netsparker Enterprise by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Oracle Idcs For Peoplesoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-idcs-for-peoplesoft-tutorial.md | In this section, you test your Azure AD single sign-on configuration with follow ## Next steps -Once you configure Oracle IDCS for PeopleSoft you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). +Once you configure Oracle IDCS for PeopleSoft you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). |
aks | Kubernetes Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md | Azure PowerShell version 5.0.0 or later. Run `Get-InstalledModule -Name Az` to f ### [Azure CLI](#tab/azure-cli) -To manually create a service principal with the Azure CLI, use the [az ad sp create-for-rbac][az-ad-sp-create] command. +To manually create a service principal with the Azure CLI, use the [`az ad sp create-for-rbac`][az-ad-sp-create] command. ```azurecli-interactive az ad sp create-for-rbac --name myAKSClusterServicePrincipal The output is similar to the following example. Copy the values for `appId` and ### [Azure PowerShell](#tab/azure-powershell) -To manually create a service principal with Azure PowerShell, use the [New-AzADServicePrincipal][new-azadserviceprincipal] command. +To manually create a service principal with Azure PowerShell, use the [`New-AzADServicePrincipal`][new-azadserviceprincipal] command. ```azurepowershell-interactive New-AzADServicePrincipal -DisplayName myAKSClusterServicePrincipal -OutVariable sp For more information, see [Create an Azure service principal with Azure PowerShe ### [Azure CLI](#tab/azure-cli) -To use an existing service principal when you create an AKS cluster using the [az aks create][az-aks-create] command, use the `--service-principal` and `--client-secret` parameters to specify the `appId` and `password` from the output of the [az ad sp create-for-rbac][az-ad-sp-create] command: +To use an existing service principal when you create an AKS cluster using the [`az aks create`][az-aks-create] command, use the `--service-principal` and `--client-secret` parameters to specify the `appId` and `password` from the output of the [`az ad sp create-for-rbac`][az-ad-sp-create] command: ```azurecli-interactive az aks create \ The service principal for the AKS cluster can be used to access other resources. ### [Azure CLI](#tab/azure-cli) -To delegate permissions, create a role assignment using the [az role assignment create][az-role-assignment-create] command. Assign the `appId` to a particular scope, such as a resource group or virtual network resource. A role then defines what permissions the service principal has on the resource, as shown in the following example: +To delegate permissions, create a role assignment using the [`az role assignment create`][az-role-assignment-create] command. Assign the `appId` to a particular scope, such as a resource group or virtual network resource. A role then defines what permissions the service principal has on the resource, as shown in the following example: ```azurecli az role assignment create --assignee <appId> --scope <resourceScope> --role Contributor The `--scope` for a resource needs to be a full resource ID, such as */subscript ### [Azure PowerShell](#tab/azure-powershell) -To delegate permissions, create a role assignment using the [New-AzRoleAssignment][new-azroleassignment] command. Assign the `ApplicationId` to a particular scope, such as a resource group or virtual network resource. A role then defines what permissions the service principal has on the resource, as shown in the following example: +To delegate permissions, create a role assignment using the [`New-AzRoleAssignment`][new-azroleassignment] command. Assign the `ApplicationId` to a particular scope, such as a resource group or virtual network resource. 
A role then defines what permissions the service principal has on the resource, as shown in the following example: ```azurepowershell-interactive New-AzRoleAssignment -ApplicationId <ApplicationId> -Scope <resourceScope> -RoleDefinitionName Contributor The following sections detail common delegations that you may need to assign. ### [Azure CLI](#tab/azure-cli) -If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [az aks create][az-aks-create] or [az aks update][az-aks-update] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. +If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. ### [Azure PowerShell](#tab/azure-powershell) -If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [New-AzAksCluster][new-azakscluster] or [Set-AzAksCluster][set-azakscluster] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. +If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [`New-AzAksCluster`][new-azakscluster] or [`Set-AzAksCluster`][set-azakscluster] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. When using AKS and an Azure AD service principal, consider the following: - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `appId`. 
- On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json`-- When you use the [az aks create][az-aks-create] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command.-- If you don't specify a service principal with AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.-- You can optionally remove the `aksServicePrincipal.json` file, and AKS creates a new service principal.-- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal created automatically isn't deleted.- - To delete the service principal, query for your clusters *servicePrincipalProfile.clientId* and then delete it using the [az ad sp delete][az-ad-sp-delete] command. Replace the values for the `-g` parameter for the resource group name, and `-n` parameter for the cluster name: +- When you delete an AKS cluster that was created by [`az aks create`][az-aks-create], the service principal created automatically isn't deleted. + - To delete the service principal, query for your clusters *servicePrincipalProfile.clientId* and then delete it using the [`az ad sp delete`][az-ad-sp-delete] command. Replace the values for the `-g` parameter for the resource group name, and `-n` parameter for the cluster name: - ```azurecli - az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv) - ``` + ```azurecli + az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv) + ``` ### [Azure PowerShell](#tab/azure-powershell) When using AKS and an Azure AD service principal, consider the following: - Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `ApplicationId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json`-- When you use the [New-AzAksCluster][new-azakscluster] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/acsServicePrincipal.json` on the machine used to run the command.-- If you don't specify a service principal with AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.-- You can optionally remove the `acsServicePrincipal.json` file, and AKS creates a new service principal.-- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal created automatically isn't deleted.- - To delete the service principal, query for your clusters *ServicePrincipalProfile.ClientId* and then delete it using the [Remove-AzADServicePrincipal][remove-azadserviceprincipal] command. 
Replace the values for the `-ResourceGroupName` parameter for the resource group name, and `-Name` parameter for the cluster name: -- ```azurepowershell-interactive - $ClientId = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster ).ServicePrincipalProfile.ClientId - Remove-AzADServicePrincipal -ApplicationId $ClientId - ``` +- When you delete an AKS cluster that was created by [`New-AzAksCluster`][new-azakscluster], the service principal created automatically isn't deleted. + - To delete the service principal, query for your clusters *ServicePrincipalProfile.ClientId* and then delete it using the [`Remove-AzADServicePrincipal`][remove-azadserviceprincipal] command. Replace the values for the `-ResourceGroupName` parameter for the resource group name, and `-Name` parameter for the cluster name: ++ ```azurepowershell-interactive + $ClientId = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster ).ServicePrincipalProfile.ClientId + Remove-AzADServicePrincipal -ApplicationId $ClientId + ``` + ## Troubleshoot ### [Azure CLI](#tab/azure-cli) -The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [az aks create][az-aks-create] may indicate a problem with the cached service principal credentials: +The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [`az aks create`][az-aks-create] may indicate a problem with the cached service principal credentials: -```console +```azurecli Operation failed with status: 'Bad Request'. Details: The credentials in ServicePrincipalProfile were invalid. Please see https://aka.ms/aks-sp-help for more details. (Details: adal: Refresh request failed. Status Code = '401'. ``` -Check the age of the credentials file by running the following command: +Check the expiration date of your service principal credentials using the [`az ad app credential list`][az-ad-app-credential-list] command with the `"[].endDateTime"` query. -```console -ls -la $HOME/.azure/aksServicePrincipal.json +```azurecli +az ad app credential list --id <app-id> --query "[].endDateTime" -o tsv ``` -The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster. +The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](/update-credentials#reset-the-existing-service-principal-credentials) or [create a new service principal](/update-credentials#create-a-new-service-principal). **General Azure CLI troubleshooting** The default expiration time for the service principal credentials is one year. I ### [Azure PowerShell](#tab/azure-powershell) -The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [New-AzAksCluster][new-azakscluster] may indicate a problem with the cached service principal credentials: +The service principal credentials for an AKS cluster are cached by Azure PowerShell. 
If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [`New-AzAksCluster`][new-azakscluster] may indicate a problem with the cached service principal credentials: -```console +```azurepowershell-interactive Operation failed with status: 'Bad Request'. Details: The credentials in ServicePrincipalProfile were invalid. Please see https://aka.ms/aks-sp-help for more details. (Details: adal: Refresh request failed. Status Code = '401'. ``` -Check the age of the credentials file by running the following command: +Check the expiration date of your service principal credentials using the [Get-AzADAppCredential][get-azadappcredential] command. The output will show you the `StartDateTime` of your credentials. ```azurepowershell-interactive-Get-ChildItem -Path $HOME/.azure/aksServicePrincipal.json +Get-AzADAppCredential -ApplicationId <ApplicationId> ``` -The default expiration time for the service principal credentials is one year. If your *aksServicePrincipal.json* file is older than one year, delete the file and retry deploying the AKS cluster. +The default expiration time for the service principal credentials is one year. If your credentials are older than one year, you can [reset the existing credentials](/update-credentials#reset-the-existing-service-principal-credentials) or [create a new service principal](/update-credentials#create-a-new-service-principal). For information on how to update the credentials, see [Update or rotate the cred [acr-intro]: ../container-registry/container-registry-intro.md [az-ad-sp-create]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac [az-ad-sp-delete]: /cli/azure/ad/sp#az_ad_sp_delete+[az-ad-app-credential-list]: /cli/azure/ad/app/credential#az_ad_app_credential_list [azure-load-balancer-overview]: ../load-balancer/load-balancer-overview.md [install-azure-cli]: /cli/azure/install-azure-cli [service-principal]:../active-directory/develop/app-objects-and-service-principals.md For information on how to update the credentials, see [Update or rotate the cred [install-the-azure-az-powershell-module]: /powershell/azure/install-az-ps [new-azakscluster]: /powershell/module/az.aks/new-azakscluster [new-azadserviceprincipal]: /powershell/module/az.resources/new-azadserviceprincipal+[get-azadappcredential]: /powershell/module/az.resources/get-azadappcredential [create-an-azure-service-principal-with-azure-powershell]: /powershell/azure/create-azure-service-principal-azureps [new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment [set-azakscluster]: /powershell/module/az.aks/set-azakscluster |
aks | Use Kms Etcd Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md | Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Service (AKS) description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 01/17/2023 Last updated : 02/20/2023 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster Use the following command to disable KMS on existing cluster. az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms ``` -Use the following command to update all secrets. Otherwise, the old secrets will still be encrypted with the previous key. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. Otherwise, the old secrets will still be encrypted with the previous key and the encrypt/decrypt permission on key vault is still required. For larger clusters, you may want to subdivide the secrets by namespace or script an update. ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - |
aks | Use Multiple Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md | The following limitations apply when you create and manage AKS clusters that sup ## Create an AKS cluster > [!IMPORTANT]-> If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. +> If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. If one node goes down, you lose control plane resources and redundancy is compromised. You can mitigate this risk by having more control plane nodes. To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [`az aks create`][az-aks-create] command. |
aks | Use System Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md | In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped t > [!Important] > If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. +This article explains how to manage system node pools in AKS. For information about how to use multiple node pools, see [use multiple node pools][use-multiple-node-pools]. + ## Before you begin ### [Azure CLI](#tab/azure-cli) You need the Azure PowerShell version 7.5.0 or later installed and configured. R The following limitations apply when you create and manage AKS clusters that support system node pools. -* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions]. -* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between one and six characters. +* See [Quotas, VM size restrictions, and region availability in AKS][quotas-skus-regions]. * An API version of 2020-03-01 or greater must be used to set a node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools, but can be migrated to contain system node pools by following [update pool mode steps](#update-existing-cluster-system-and-user-node-pools).+* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between one and six characters. * The mode of a node pool is a required property and must be explicitly set when using ARM templates or direct API calls. ## System and user node pools You can enforce this behavior by creating a dedicated system node pool. Use the System node pools have the following restrictions: +* System node pools must support at least 30 pods as described by the [minimum and maximum value formula for pods][maximum-pods]. * System pools osType must be Linux. * User node pools osType may be Linux or Windows. * System pools must contain at least one node, and user node pools may contain zero or more nodes. * System node pools require a VM SKU of at least 2 vCPUs and 4 GB memory. But burstable-VM(B series) isn't recommended. * A minimum of two nodes 4 vCPUs is recommended (for example, Standard_DS4_v2), especially for large clusters (Multiple CoreDNS Pod replicas, 3-4+ add-ons, etc.).-* System node pools must support at least 30 pods as described by the [minimum and maximum value formula for pods][maximum-pods]. * Spot node pools require user node pools. * Adding another system node pool or changing which node pool is a system node pool *does not* automatically move system pods. System pods can continue to run on the same node pool, even if you change it to a user node pool. If you delete or scale down a node pool running system pods that were previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool. Remove-AzResourceGroup -Name myResourceGroup ## Next steps -In this article, you learned how to create and manage system node pools in an AKS cluster. 
For more information about how to use multiple node pools, see [use multiple node pools][use-multiple-node-pools]. +In this article, you learned how to create and manage system node pools in an AKS cluster. For information about how to start and stop AKS node pools, see [start and stop AKS node pools][start-stop-nodepools]. <!-- EXTERNAL LINKS --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ In this article, you learned how to create and manage system node pools in an AK [use-multiple-node-pools]: use-multiple-node-pools.md [maximum-pods]: configure-azure-cni.md#maximum-pods-per-node [update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools+[start-stop-nodepools]: start-stop-nodepools.md |
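Building on the system node pool guidance in the entry above, the following sketch adds a dedicated system node pool that satisfies the listed restrictions (Linux, at least 2 vCPUs and 4 GB memory, three nodes); the `CriticalAddonsOnly` taint keeps application pods off it. Names and the VM size are illustrative:

```azurecli-interactive
# Add a dedicated system node pool to an existing cluster (illustrative names and size).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name systempool \
    --node-count 3 \
    --node-vm-size Standard_DS4_v2 \
    --mode System \
    --node-taints CriticalAddonsOnly=true:NoSchedule
```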
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | Direct management API | No | Yes | Yes | Yes | Yes | | Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes | | Static IP | No | Yes | Yes | Yes | Yes |-| [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | -| [GraphQL APIs](graphql-api.md)<sup>5</sup> | Yes | Yes | Yes | Yes | Yes | -| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes | +| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | +| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | +| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | <sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>-<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. <br/> -<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. +<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | The following table compares features available in the managed gateway versus th | [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |-| [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ | -| [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ | -| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ | --<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier. +| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ | +| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ | +| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ | ### Policies Managed and self-hosted gateways support all available [policies](api-management | Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> | | | -- | -- | - | | [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |-| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ | +| [GraphQL resolvers](api-management-policies.md#graphql-resolver-policies) and [GraphQL validation](api-management-policies.md#validation-policies)| ✔️ | ✔️ | ❌ | +| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ | | [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>-| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ | <sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/> |
api-management | Api Management Howto Deploy Multi Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md | description: Learn how to deploy a Premium tier Azure API Management instance to Previously updated : 09/27/2022 Last updated : 01/26/2023 When adding a region, you configure: ## <a name="remove-region"> </a>Remove an API Management service region 1. In the Azure portal, navigate to your API Management service and select **Locations** from the left menu.-2. For the location you would like to remove, select the context menu using the **...** button at the right end of the table. Select **Delete**. -3. Confirm the deletion and select **Save** to apply the changes. +1. For the location you would like to remove, select the context menu using the **...** button at the right end of the table. Select **Delete**. +1. Confirm the deletion and select **Save** to apply the changes. + ## <a name="route-backend"> </a>Route API calls to regional backend services By default, each API routes requests to a single backend service URL. Even if yo To take advantage of geographical distribution of your system, you should have backend services deployed in the same regions as Azure API Management instances. Then, using policies and `@(context.Deployment.Region)` property, you can route the traffic to local instances of your backend. -> [!TIP] -> Optionally set the `disableGateway` property in a regional gateway to disable routing of API traffic there. For example, temporarily disable a regional gateway when testing or updating a regional backend service. - 1. Navigate to your Azure API Management instance and select **APIs** from the left menu. 2. Select your desired API. 3. Select **Code editor** from the arrow dropdown in the **Inbound processing**. API Management routes the requests to a regional gateway based on [the lowest la 1. [Configure the API Management regional status endpoints in Traffic Manager](../traffic-manager/traffic-manager-monitoring.md). The regional status endpoints follow the URL pattern of `https://<service-name>-<region>-01.regional.azure-api.net/status-0123456789abcdef`, for example `https://contoso-westus2-01.regional.azure-api.net/status-0123456789abcdef`. 1. Specify [the routing method](../traffic-manager/traffic-manager-routing-methods.md) of the Traffic Manager. +## Disable routing to a regional gateway ++Under some conditions, you might need to temporarily disable routing to one of the regional gateways. For example: ++* After adding a new region, to keep it disabled while you configure and test the regional backend service +* During regular backend maintenance in a region +* To redirect traffic to other regions during a planned disaster recovery drill that simulates an unavailable region, or during a regional failure ++To disable routing to a regional gateway in your API Management instance, update the gateway's `disableGateway` property value to `true`. You can set the value using the [Create or update service](/rest/api/apimanagement/current-glet, or other Azure tools. + +To disable a regional gateway using the Azure CLI: ++1. Use the [az apim show](/cli/azure/apim#az-apim-show) command to show the locations, gateway status, and regional URLs configured for the API Management instance. 
+ ```azurecli + az apim show --name contoso --resource-group myResourceGroup \ + --query "additionalLocations[].{Location:location,Disabled:disableGateway,Url:gatewayRegionalUrl}" \ + --output table + ``` + Example output: ++ ``` + Location Disabled Url + - - + West US 2 True https://contoso-westus2-01.regional.azure-api.net + West Europe True https://contoso-westeurope-01.regional.azure-api.net + ``` +1. Use the [az apim update](/cli/azure/apim#az-apim-update) command to disable the gateway in an available location, such as West US 2. + ```azurecli + az apim update --name contoso --resource-group myResourceGroup \ + --set additionalLocations[location="West US 2"].disableGateway=true + ``` ++ The update may take a few minutes. ++1. Verify that traffic directed to the regional gateway URL is redirected to another region. + +To restore routing to the regional gateway, set the value of `disableGateway` to `false`. + ## Virtual networking This section provides considerations for multi-region deployments when the API Management instance is injected in a virtual network. |
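To restore routing after maintenance or a drill, the same `az apim update` pattern applies; a sketch using the example instance and region from the steps above:

```azurecli
az apim update --name contoso --resource-group myResourceGroup \
    --set additionalLocations[location="West US 2"].disableGateway=false
```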
api-management | Api Management Howto Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md | When configuring a policy, you must first select the scope at which the policy a For more information, see [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order). +### GraphQL resolver policies ++In API Management, a [GraphQL resolver](configure-graphql-resolver.md) is configured using policies scoped to a specific operation type and field in a [GraphQL schema](graphql-apis-overview.md#resolvers). ++* Currently, API Management supports GraphQL resolvers that specify HTTP data sources. Configure a single [`http-data-source`](http-data-source-policy.md) policy with elements to specify a request to (and optionally response from) an HTTP data source. +* You can't include a resolver policy in policy definitions at other scopes such as API, product, or all APIs. It also doesn't inherit policies configured at other scopes. +* The gateway evaluates a resolver-scoped policy *after* any configured `inbound` and `backend` policies in the policy execution pipeline. ++For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). + ## Examples ### Apply policies specified at different scopes |
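As a minimal sketch of the shape a resolver-scoped policy takes, a resolver wraps a single `http-data-source` policy; the backend URL here is hypothetical:

```xml
<http-data-source>
    <http-request>
        <set-method>GET</set-method>
        <!-- Hypothetical backend endpoint that returns the field's data -->
        <set-url>https://example.contoso.com/api/items</set-url>
    </http-request>
</http-data-source>
```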
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: - [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. - [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. -## GraphQL API policies -- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. -- [Set GraphQL resolver](set-graphql-resolver-policy.md) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.+## GraphQL resolver policies +- [HTTP data source for resolver](http-data-source-policy.md) - Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. +- [Publish event to GraphQL subscription](publish-event-policy.md) - Publishes an event to one or more subscriptions specified in a GraphQL API schema. Used in the `http-response` element of the `http-data-source` policy ## Transformation policies - [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML. More information about policies: ## Validation policies - [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.+- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. - [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema. - [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in |
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | The `context` variable is implicitly available in every policy [expression](api- |Context Variable|Allowed methods, properties, and parameter values| |-|-|-|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Product`](#ref-context-product)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| +|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| |<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` | |<a id="ref-context-deployment"></a>`context.Deployment`|[`Gateway`](#ref-context-gateway)<br /><br /> `GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`| |<a id="ref-context-gateway"></a>`context.Deployment.Gateway`|`Id`: `string` (returns 'managed' for managed gateways)<br /><br /> `InstanceId`: `string` (returns 'managed' for managed gateways)<br /><br /> `IsManaged`: `bool`|+|<a id="ref-context-graphql"></a>`context.GraphQL`|`GraphQLArguments`: `IGraphQLDataObject`<br /><br /> `Parent`: `IGraphQLDataObject`<br/><br/>[Examples](configure-graphql-resolver.md#graphql-context)| |<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error handling](api-management-error-handling-policies.md).| |<a 
id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`| |<a id="ref-context-product"></a>`context.Product`|`Apis`: `IEnumerable<`[`IApi`](#ref-iapi)`>`<br /><br /> `ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`| The `context` variable is implicitly available in every policy [expression](api- |<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`| |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)|+|<a id="ref-igraphqldataobject"></a>`IGraphQLDataObject`|TBD<br /><br />| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`| |<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| |<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |
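As a quick illustration of the `context` properties listed above, the following sketch uses a few of them in a policy expression inside a `trace` policy; the source name and message format are arbitrary:

```xml
<trace source="request-logger" severity="information">
    <message>@($"Request {context.RequestId} for API {context.Api.Name}, operation {context.Operation.Name}, region {context.Deployment.Region}")</message>
</trace>
```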
api-management | Configure Graphql Resolver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md | + + Title: Configure GraphQL resolver in Azure API Management +description: Configure a GraphQL resolver in Azure AI Management for a field in an object type specified in a GraphQL schema +++++ Last updated : 02/22/2023++++# Configure a GraphQL resolver ++Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently, API Management supports resolvers that use HTTP-based data sources (REST or SOAP APIs). ++* A resolver is a resource containing a policy definition that's invoked only when a matching object type and field is executed. +* Each resolver resolves data for a single field. To resolve data for multiple fields, configure a separate resolver for each. +* Resolver-scoped policies are evaluated *after* any `inbound` and `backend` policies in the policy execution pipeline. They don't inherit policies from other scopes. For more information, see [Policies in API Management](api-management-howto-policies.md). +++> [!IMPORTANT] +> * If you use the preview `set-graphql-resolver` policy in policy definitions, you should migrate to the managed resolvers described in this article. +> * After you configure a managed resolver for a GraphQL field, the gateway will skip the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. ++## Prerequisites ++- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). +- Import a [pass-through](graphql-api.md) or [synthetic](graphql-schema-resolve-api.md) GraphQL API. ++## Create a resolver ++1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. ++1. In the left menu, select **APIs** and then the name of your GraphQL API. +1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. + 1. Select a field, and then in the left margin, hover the pointer. + 1. Select **+ Add Resolver**. ++ :::image type="content" source="media/configure-graphql-resolver/add-resolver.png" alt-text="Screenshot of adding a resolver from a field in GraphQL schema in the portal."::: +1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. +1. In the **Resolver policy** editor, update the [`http-data-source`](http-data-source-policy.md) policy with child elements for your scenario. + 1. Update the required `http-request` element with policies to transform the GraphQL operation to an HTTP request. + 1. Optionally add an `http-response` element, and add child policies to transform the HTTP response of the resolver. If the `http-response` element isn't specified, the response is returned as a raw string. + 1. Select **Create**. + + :::image type="content" source="media/configure-graphql-resolver/configure-resolver-policy.png" alt-text="Screenshot of resolver policy editor in the portal." lightbox="media/configure-graphql-resolver/configure-resolver-policy.png"::: ++1. The resolver is attached to the field. Go to the **Resolvers** tab to list and manage the resolvers configured for the API. 
++ :::image type="content" source="media/configure-graphql-resolver/list-resolvers.png" alt-text="Screenshot of the resolvers list for GraphQL API in the portal." lightbox="media/configure-graphql-resolver/list-resolvers.png"::: ++ > [!TIP] + > The **Linked** column indicates whether or not the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked. ++++## GraphQL context ++* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: + * `context.GraphQL` properties are set to the arguments (`Arguments`) and parent object (`Parent`) for the current resolver execution. + * The HTTP request context contains arguments that are passed in the GraphQL query as its body. + * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request. +The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with a GraphQL resolver. ++### context.GraphQL.Parent ++The `context.GraphQL.Parent` property is set to the parent object for the current resolver execution. Consider the following partial schema: ++``` graphql +type Comment { + id: ID! + owner: String! + content: String! +} ++type Blog { + id: ID! + title: String! + content: String! + comments: [Comment]! + comment(id: ID!): Comment +} ++type Query { + getBlogs: [Blog]! + getBlog(id: ID!): Blog +} +``` ++Also, consider a GraphQL query for all the information for a specific blog: ++``` graphql +query { + getBlog(id: 1) { + title + content + comments { + id + owner + content + } + } +} +``` ++If you set a resolver for the `comments` field in the `Blog` type, you'll want to understand which blog ID to use. You can get the ID of the blog using `context.GraphQL.Parent["id"]` as shown in the following resolver: ++``` xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>@($"https://data.contoso.com/api/blog/{context.GraphQL.Parent["id"]}")</set-url> + </http-request> +</http-data-source> +``` ++### context.GraphQL.Arguments ++The arguments for a parameterized GraphQL query are added to `context.GraphQL.Arguments`. For example, consider the following two queries: ++``` graphql +query($id: Int) { + getComment(id: $id) { + content + } +} ++query { + getComment(id: 2) { + content + } +} +``` ++These queries are two ways of calling the `getComment` resolver. The GraphQL client sends one of the following JSON payloads: ++``` json +{ + "query": "query($id: Int) { getComment(id: $id) { content } }", + "variables": { "id": 2 } +} ++{ + "query": "query { getComment(id: 2) { content } }" +} +``` ++You can define the resolver as follows: ++``` xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>@($"https://data.contoso.com/api/comment/{context.GraphQL.Arguments["id"]}")</set-url> + </http-request> +</http-data-source> +``` ++## Next steps ++For more resolver examples, see: +++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) ++* [Samples APIs for Azure API Management](https://github.com/Azure-Samples/api-management-sample-apis) |
api-management | Graphql Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md | Title: Import a GraphQL API to Azure API Management | Microsoft Docs + Title: Add a GraphQL API to Azure API Management | Microsoft Docs description: Learn how to add an existing GraphQL service as an API in Azure API Management using the Azure portal, Azure CLI, or Azure PowerShell. Manage the API and enable queries to pass through to the GraphQL endpoint. Previously updated : 10/27/2022 Last updated : 02/24/2023 -> * Learn more about the benefits of using GraphQL APIs. -> * Add a GraphQL API to your API Management instance. +> * Add a pass-through GraphQL API to your API Management instance. > * Test your GraphQL API.-> * Learn the limitations of your GraphQL API in API Management. If you want to import a GraphQL schema and set up field resolvers using REST or SOAP API endpoints, see [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md). If you want to import a GraphQL schema and set up field resolvers using REST or 1. In the dialog box, select **Full** and complete the required form fields. - :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: + :::image type="content" source="media/graphql-api/create-from-graphql-endpoint.png" alt-text="Screenshot of fields for creating a GraphQL API."::: | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "Star Wars" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | + | **GraphQL type** | Select **Pass-through GraphQL** to import from an existing GraphQL API endpoint. | + | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "swapi" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | | **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). | | **Description** | Add a description of your API. |- | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | + | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. | 1. Select **Create**.-1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. +1. 
After the API is created, browse or modify the schema on the **Design** tab. :::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal."::: #### [Azure CLI](#tab/cli) After importing the API, if needed, you can update the settings by using the [Se [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] +### Test a subscription +If your GraphQL API supports a subscription, you can test it in the test console. ++1. Ensure that your API allows a WebSocket URL scheme (**WS** or **WSS**) that's appropriate for your API. You can enable this setting on the **Settings** tab. +1. Set up a subscription query in the query editor, and then select **Connect** to establish a WebSocket connection to the backend service. ++ :::image type="content" source="media/graphql-api/test-graphql-subscription.png" alt-text="Screenshot of a subscription query in the query editor."::: +1. Review connection details in the **Subscription** pane. ++ :::image type="content" source="media/graphql-api/graphql-websocket-connection.png" alt-text="Screenshot of Websocket connection in the portal."::: + +1. Subscribed events appear in the **Subscription** pane. The WebSocket connection is maintained until you disconnect it or you connect to a new WebSocket subscription. ++ :::image type="content" source="media/graphql-api/graphql-subscription-event.png" alt-text="Screenshot of GraphQL subscription events in the portal."::: ++## Secure your GraphQL API ++Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. + [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
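A sketch of what applying that validation policy might look like in the API's inbound section; the size and depth limits shown are illustrative, not recommendations:

```xml
<inbound>
    <base />
    <!-- Illustrative limits: maximum request size in bytes and maximum query depth -->
    <validate-graphql-request error-variable-name="graphql-validation-errors" max-size="102400" max-depth="10" />
</inbound>
```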
api-management | Graphql Apis Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md | + + Title: Support for GraphQL APIs - Azure API Management +description: Learn about GraphQL and how Azure API Management helps you manage GraphQL APIs. +++++ Last updated : 02/26/2023++++# Overview of GraphQL APIs in Azure API Management ++You can use API Management to manage GraphQL APIs - APIs based on the GraphQL query language. GraphQL provides a complete and understandable description of the data in an API, giving clients the power to efficiently retrieve exactly the data they need. [Learn more about GraphQL](https://graphql.org/learn/) ++API Management helps you import, manage, protect, test, publish, and monitor GraphQL APIs. You can choose one of two API models: +++|Pass-through GraphQL |Synthetic GraphQL | +||| +| ▪️ Pass-through API to existing GraphQL service endpoint<br><br/>▪️ Support for GraphQL queries, mutations, and subscriptions | ▪️ API based on a custom GraphQL schema<br></br>▪️ Support for GraphQL queries, mutations, and subscriptions<br/><br/>▪️ Configure custom resolvers, for example, to HTTP data sources<br/><br/>▪️ Develop GraphQL schemas and GraphQL-based clients while consuming data from legacy APIs | ++## Availability ++* GraphQL APIs are supported in all API Management service tiers +* Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway +* GraphQL subscription support in synthetic GraphQL APIs is currently in preview ++## What is GraphQL? ++GraphQL is an open-source, industry-standard query language for APIs. Unlike REST-style APIs designed around actions over resources, GraphQL APIs support a broader set of use cases and focus on data types, schemas, and queries. ++The GraphQL specification explicitly solves common issues experienced by client web apps that rely on REST APIs: ++* It can take a large number of requests to fulfill the data needs for a single page +* REST APIs often return more data than needed the page being rendered +* The client app needs to poll to get new information ++Using a GraphQL API, the client app can specify the data they need to render a page in a query document that is sent as a single request to a GraphQL service. A client app can also subscribe to data updates pushed from the GraphQL service in real time. ++## Schema and operation types ++In API Management, add a GraphQL API from a GraphQL schema, either retrieved from a backend GraphQL API endpoint or uploaded by you. A GraphQL schema describes: ++* Data object types and fields that clients can request from a GraphQL API +* Operation types allowed on the data, such as queries ++For example, a basic GraphQL schema for user data and a query for all users might look like: ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++API Management supports the following operation types in GraphQL schemas. For more information about these operation types, see the [GraphQL specification](https://spec.graphql.org/October2021/#sec-Subscription-Operation-Definitions). ++* **Query** - Fetches data, similar to a `GET` operation in REST +* **Mutation** - Modifies server-side data, similar to a `PUT` or `PATCH` operation in REST +* **Subscription** - Enables notifying subscribed clients in real time about changes to data on the GraphQL service ++ For example, when data is modified via a GraphQL mutation, subscribed clients could be automatically notified about the change. 
++> [!IMPORTANT] +> API Management supports subscriptions implemented using the [graphql-ws](https://github.com/enisdenjo/graphql-ws) WebSocket protocol. Queries and mutations aren't supported over WebSocket. +> ++## Resolvers ++*Resolvers* take care of mapping the GraphQL schema to backend data, producing the data for each field in an object type. The data source could be an API, a database, or another service. For example, a resolver function would be responsible for returning data for the `users` query in the preceding example. ++In API Management, you can create a *custom resolver* to map a field in an object type to a backend data source. You configure resolvers for fields in synthetic GraphQL API schemas, but you can also configure them to override the default field resolvers used by pass-through GraphQL APIs. ++API Management currently supports HTTP-based resolvers to return the data for fields in a GraphQL schema. To use an HTTP-based resolver, configure a [`http-data-source`](http-data-source-policy.md) policy that transforms the API request (and optionally the response) into an HTTP request/response. ++For example, a resolver for the preceding `users` query might map to a `GET` operation in a backend REST API: ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://myapi.contoso.com/api/users</set-url> + </http-request> +</http-data-source> +``` ++For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). ++## Manage GraphQL APIs ++* Secure GraphQL APIs by applying both existing access control policies and a [GraphQL validation policy](validate-graphql-request-policy.md) to secure and protect against GraphQL-specific attacks. +* Explore the GraphQL schema and run test queries against the GraphQL APIs in the Azure and developer portals. +++## Next steps ++- [Import a GraphQL API](graphql-api.md) +- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md) |
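To make the example schema and resolver above concrete, a client would request the `users` field with a query like the following; the gateway invokes the configured resolver and returns the results under the standard GraphQL `data` envelope:

```graphql
query {
  users {
    id
    name
  }
}
```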
api-management | Graphql Schema Resolve Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md | Title: Import GraphQL schema and set up field resolvers | Microsoft Docs + Title: Add a synthetic GraphQL API to Azure API Management | Microsoft Docs -description: Import a GraphQL schema to API Management and configure a policy to resolve a GraphQL query using an HTTP-based data source. +description: Add a synthetic GraphQL API by importing a GraphQL schema to API Management and configuring field resolvers that use HTTP-based data sources. Previously updated : 05/17/2022 Last updated : 02/21/2023 -# Import a GraphQL schema and set up field resolvers +# Add a synthetic GraphQL API and set up field resolvers [!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] - In this article, you'll: > [!div class="checklist"] > * Import a GraphQL schema to your API Management instance-> * Set up a resolver for a GraphQL query using an existing HTTP endpoints +> * Set up a resolver for a GraphQL query using an existing HTTP endpoint > * Test your GraphQL API If you want to expose an existing GraphQL endpoint as an API, see [Import a GraphQL API](graphql-api.md). If you want to expose an existing GraphQL endpoint as an API, see [Import a Grap ## Add a GraphQL schema 1. From the side navigation menu, under the **APIs** section, select **APIs**.-1. Under **Define a new API**, select the **Synthetic GraphQL** icon. +1. Under **Define a new API**, select the **GraphQL** icon. - :::image type="content" source="media/graphql-schema-resolve-api/import-graphql-api.png" alt-text="Screenshot of selecting Synthetic GraphQL icon from list of APIs."::: + :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs."::: 1. In the dialog box, select **Full** and complete the required form fields. :::image type="content" source="media/graphql-schema-resolve-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: - | Field | Description | + | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **Fallback GraphQL endpoint** | For this scenario, optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | - | **Upload schema file** | Select to browse and upload a valid GraphQL schema file with the `.graphql` extension. | - | Description | Add a description of your API. | - | URL scheme | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | + | **GraphQL type** | Select **Synthetic GraphQL** to import from a GraphQL schema file. | + | **Fallback GraphQL endpoint** | Optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | + | **Description** | Add a description of your API. | + | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. 
It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |+ 1. Select **Create**. -1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. +1. After the API is created, browse or modify the schema on the **Design** tab. ## Configure resolver -Configure the [set-graphql-resolver](set-graphql-resolver-policy.md) policy to map a field in the schema to an existing HTTP endpoint. +Configure a resolver to map a field in the schema to an existing HTTP endpoint. ++<!-- Add link to resolver how-to article for details --> Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query. type User { ``` 1. From the side navigation menu, under the **APIs** section, select **APIs** > your GraphQL API.-1. On the **Design** tab of your GraphQL API, select **All operations**. -1. In the **Backend** processing section, select **+ Add policy**. -1. Configure the `set-graphql-resolver` policy to resolve the *users* query using an HTTP data source. +1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. + 1. Select a field, and then in the left margin, hover the pointer. + 1. Select **+ Add Resolver** ++ :::image type="content" source="media/graphql-schema-resolve-api/add-resolver.png" alt-text="Screenshot of adding a GraphQL resolver in the portal."::: ++1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. - For example, the following `set-graphql-resolver` policy retrieves the *users* field by using a `GET` call on an existing HTTP data source. +1. In the **Resolver policy** editor, update the `<http-data-source>` element with child elements for your scenario. For example, the following resolver retrieves the *users* field by using a `GET` call on an existing HTTP data source. + ```xml- <set-graphql-resolver parent-type="Query" field="users"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>https://myapi.contoso.com/users</set-url> </http-request> </http-data-source>- </set-graphql-resolver> ```-1. To resolve data for other fields in the schema, repeat the preceding step. -1. Select **Save**. ++ :::image type="content" source="media/graphql-schema-resolve-api/configure-resolver-policy.png" alt-text="Screenshot of configuring resolver policy in the portal."::: +1. Select **Create**. +1. To resolve data for another field in the schema, repeat the preceding steps to create a resolver. [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] +## Secure your GraphQL API ++Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. ++ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
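Assuming the backend at `https://myapi.contoso.com/users` returns objects with matching fields, testing the `users` query against the synthetic API should produce a response of roughly this shape (values are illustrative):

```json
{
  "data": {
    "users": [
      { "id": "1", "name": "First example user" },
      { "id": "2", "name": "Second example user" }
    ]
  }
}
```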
api-management | Http Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md | + + Title: Azure API Management policy reference - http-data-source | Microsoft Docs +description: Reference for the http-data-source resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++ Last updated : 02/23/2023++++# HTTP data source for a resolver ++The `http-data-source` resolver policy configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management. +++## Policy statement ++```xml +<http-data-source> + <http-request> + <set-method>...set-method policy configuration...</set-method> + <set-url>URL</set-url> + <set-header>...set-header policy configuration...</set-header> + <set-body>...set-body policy configuration...</set-body> + <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate> + </http-request> + <http-response> + <xml-to-json>...xml-to-json policy configuration...</xml-to-json> + <find-and-replace>...find-and-replace policy configuration...</find-and-replace> + <set-body>...set-body policy configuration...</set-body> + <publish-event>...publish-event policy configuration...</publish-event> + </http-response> +</http-data-source> +``` ++## Elements ++|Name|Description|Required| +|-|--|--| +| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes | +| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No | ++### http-request elements ++> [!NOTE] +> Each child element may be specified at most once. Specify elements in the order listed. +++|Element|Description|Required| +|-|--|--| +| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes | +| set-url | Sets the URL of the resolver's HTTP request. | Yes | +| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. | No | +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No | +| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. | No | ++### http-response elements ++> [!NOTE] +> Each child element may be specified at most once. Specify elements in the order listed. ++|Name|Description|Required| +|-|--|--| +| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No | +| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No | +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No | +| [publish-event](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in the GraphQL API schema. | No | ++## Usage ++- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver +- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption ++### Usage notes ++* This policy is invoked only when resolving a single field in a matching GraphQL query, mutation, or subscription. 
++## Examples ++### Resolver for a GraphQL query ++The following example resolves a query by making an HTTP `GET` call to a backend data source. ++#### Example schema ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://data.contoso.com/get/users</set-url> + </http-request> +</http-data-source> +``` ++### Resolver for a GraphQL query that returns a list, using a liquid template ++The following example uses a liquid template, supported for use in the [set-body](set-body-policy.md) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response. ++#### Example schema ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://data.contoso.com/users</set-url> + </http-request> + <http-response> + <set-body template="liquid"> + [ + {% JSONArrayFor elem in body %} + { + "name": "{{elem.username}}" + } + {% endJSONArrayFor %} + ] + </set-body> + </http-response> +</http-data-source> +``` ++### Resolver for a GraphQL mutation ++The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body. The body that is sent will look like the following JSON: ++``` json +{ + "name": "the-provided-name" +} +``` ++#### Example schema ++``` +type Query { + users: [User] +} ++type Mutation { + makeUser(name: String!): User +} ++type User { + id: String! + name: String! +} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>POST</set-method> + <set-url> https://data.contoso.com/user/create </set-url> + <set-header name="Content-Type" exists-action="override"> + <value>application/json</value> + </set-header> + <set-body>@{ + var args = context.Request.Body.As<JObject>(true)["arguments"]; + JObject jsonObject = new JObject(); + jsonObject.Add("name", args["name"]); + return jsonObject.ToString(); + }</set-body> + </http-request> +</http-data-source> +``` ++## Related policies ++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) + |
api-management | Publish Event Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md | + + Title: Azure API Management policy reference - publish-event | Microsoft Docs +description: Reference for the publish-event policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++ Last updated : 02/23/2023++++# Publish event to GraphQL subscription ++The `publish-event` policy publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy using an [http-data-source](http-data-source-policy.md) GraphQL resolver for a related field in the schema for another operation type such as a mutation. At runtime, the event is published to connected GraphQL clients. Learn more about [GraphQL APIs in API Management](graphql-apis-overview.md). +++<!--Link to resolver configuration article --> ++## Policy statement ++```xml +<http-data-source + <http-request> + [...] + </http-request> + <http-response> + [...] + <publish-event> + <targets> + <graphql-subscription id="subscription field" /> + </targets> + </publish-event> + </http-response> +</http-data-source> +``` ++## Elements ++|Name|Description|Required| +|-|--|--| +| targets | One or more subscriptions in the GraphQL schema, specified in `target` subelements, to which the event is published. | Yes | +++## Usage ++- [**Policy sections:**](./api-management-howto-policies.md#sections) `http-response` element in `http-data-source` resolver +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver only +- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption ++### Usage notes ++* This policy is invoked only when a related GraphQL query or mutation is executed. ++## Example ++The following example policy definition is configured in a resolver for the `createUser` mutation. It publishes an event to the `onUserCreated` subscription. ++### Example schema ++``` +type User { + id: Int! + name: String! +} +++type Mutation { + createUser(id: Int!, name: String!): User +} ++type Subscription { + onUserCreated: User! +} +``` ++### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>POST</set-method> + <set-url>https://contoso.com/api/user</set-url> + <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body> + </http-request> + <http-response> + <publish-event> + <targets> + <graphql-subscription id="onUserCreated" /> + </targets> + </publish-event> + </http-response> +</http-data-source> +``` ++## Related policies ++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) + |
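Connected clients receive those published events by running the subscription declared in the example schema; a sketch of such a subscription operation:

```graphql
subscription {
  onUserCreated {
    id
    name
  }
}
```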
api-management | Redirect Content Urls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/redirect-content-urls-policy.md | -The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to rewrite response body links to make them point to the gateway. Use in the inbound section for an opposite effect. +The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to rewrite response body links to the backend service to make them point to the gateway. Use in the inbound section for an opposite effect. > [!NOTE] > This policy does not change any header values such as `Location` headers. To change header values, use the [set-header](set-header-policy.md) policy. The `redirect-content-urls` policy rewrites (masks) links in the response body s * [API Management transformation policies](api-management-transformation-policies.md) |
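A minimal sketch of using the policy as described, masking backend links in response bodies on the outbound path:

```xml
<outbound>
    <base />
    <!-- Rewrite backend URLs in the response body so they point at the gateway -->
    <redirect-content-urls />
</outbound>
```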
api-management | Set Graphql Resolver Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md | Title: Azure API Management policy reference - set-graphql-resolver | Microsoft Docs -description: Reference for the set-graphql-resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. +description: Reference for the set-graphql-resolver policy in Azure API Management. Provides policy usage, settings, and examples. This policy is retired. - Previously updated : 12/07/2022+ Last updated : 02/09/2023 -# Set GraphQL resolver +# Set GraphQL resolver (retired) -The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API). +> [!IMPORTANT] +> * The `set-graphql-resolver` policy is retired. Customers using the `set-graphql-resolver` policy must migrate to the [managed resolvers](configure-graphql-resolver.md) for GraphQL APIs, which provide enhanced functionality. +> * After you configure a managed resolver for a GraphQL field, the gateway skips the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. +The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API). ## Policy statement The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in * This policy is invoked only when a matching GraphQL query is executed. * The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition. - ## GraphQL context * The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: |
application-gateway | Configuration Listeners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md | Associate a frontend port. You can select an existing port or create a new one. > > **Outbound Rule**: (no specific requirement) -**Limitation**: The portal currently supports private and public listeners creation only for the Public clouds. - ## Protocol Choose HTTP or HTTPS: |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 02/21/2023 Last updated : 03/02/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." For more information, see [Tutorial: Deploy applications using GitOps with Flux The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. +### 1.6.4 (February 2023) ++Changes made for this version: ++- Disabled extension reconciler (which attempts to restore the Flux extension if it fails). This resolves a potential bug where, if the reconciler is unable to recover a failed Flux extension and `prune` is set to `true`, the extension and deployed objects may be deleted. + ### 1.6.3 (December 2022) Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0) |
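To check which `microsoft.flux` version a connected cluster is currently running before applying these notes, one option is the Azure CLI sketch below; it assumes the `k8s-extension` CLI extension is installed and uses placeholder resource names:

```azurecli
# List extensions on the cluster and show only the Flux extension's name and version
az k8s-extension list \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --cluster-type connectedClusters \
  --query "[?extensionType=='microsoft.flux'].{name:name, version:version}" \
  --output table
```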
azure-arc | Tutorial Akv Secrets Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md | Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 10/12/2022 Last updated : 03/06/2023 -Benefits of the Azure Key Vault Secrets Provider extension include the following: +Capabilities of the Azure Key Vault Secrets Provider extension include: - Mounts secrets/keys/certs to pod using a CSI Inline volume - Supports pod portability with the SecretProviderClass CRD Benefits of the Azure Key Vault Secrets Provider extension include the following - Elastic Kubernetes Service - Tanzu Kubernetes Grid - Azure Red Hat OpenShift-- Ensure you have met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension.+- Ensure you've met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension. > [!TIP] > When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview. You can install the Azure Key Vault Secrets Provider extension on your connected [](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg) -1. Follow the prompts to deploy the extension. If needed, you can customize the installation by changing the default options on the **Configuration** tab. +1. Follow the prompts to deploy the extension. If needed, customize the installation by changing the default options on the **Configuration** tab. ### Azure CLI You can install the Azure Key Vault Secrets Provider extension on your connected export RESOURCE_GROUP=<resource-group-name> ``` -2. Install the Secrets Store CSI Driver and the Azure Key Vault Secrets Provider extension by running the following command: +2. Install the Secrets Store CSI Driver and the Azure Key Vault Secrets Provider extension by running the following command: ```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider ``` -You should see output similar to the example below. Note that it may take several minutes before the secrets provider Helm chart is deployed to the cluster. +You should see output similar to this example. Note that it may take several minutes before the secrets provider Helm chart is deployed to the cluster. ```json { To confirm successful installation of the Azure Key Vault Secrets Provider exten az k8s-extension show --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name akvsecretsprovider ``` -You should see output similar to the example below. +You should see output similar to this example. ```json { You should see output similar to the example below. 
Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique. - Set the following environment variables: ```azurecli-interactive export AKV_RESOURCE_GROUP=<resource-group-name> export AZUREKEYVAULT_NAME=<AKV-name> export AZUREKEYVAULT_LOCATION=<AKV-location> ```+ Next, run the following command ```azurecli Before you move on to the next section, take note of the following properties: ## Provide identity to access Azure Key Vault -Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed through a service principal. Follow the steps below to provide an identity that can access your Key Vault. +Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed through a service principal. Follow these steps to provide an identity that can access your Key Vault. -1. Follow the steps [here](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to create a service principal in Azure. Take note of the Client ID and Client Secret generated in this step. -1. Provide Azure Key Vault GET permission to the created service principal by following the steps [here](../../key-vault/general/assign-access-policy.md). -1. Use the client ID and Client Secret from step 1 to create a Kubernetes secret on the Arc connected cluster: +1. Follow the steps [to create a service principal in Azure](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Take note of the Client ID and Client Secret generated in this step. +1. Provide Azure Key Vault GET permission to the created service principal by [following these steps](../../key-vault/general/assign-access-policy.md). +1. Use the client ID and Client Secret from the first step to create a Kubernetes secret on the connected cluster: ```bash kubectl create secret generic secrets-store-creds --from-literal clientid="<client-id>" --from-literal clientsecret="<client-secret>" kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/DemoSecret ## Additional configuration options -The following configuration settings are available for the Azure Key Vault Secrets Provider extension: +The Azure Key Vault Secrets Provider extension supports [Helm chart configurations](https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/charts/csi-secrets-store-provider-azure/README.md#configuration). ++The following configuration settings are frequently used with the Azure Key Vault Secrets Provider extension: | Configuration Setting | Default | Description | | | -- | -- | | enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store |-| rotationPollInterval | 2m | Specifies the secret rotation poll interval duration if `enableSecretRotation` is `true`. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. | +| rotationPollInterval | 2m | If `enableSecretRotation` is `true`, specifies the secret rotation poll interval duration. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. 
| | syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. | These settings can be specified when the extension is installed by using the `az k8s-extension create` command: These settings can be specified when the extension is installed by using the `az az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true ``` -You can also change the settings after installation by using the `az k8s-extension update` command: +You can also change these settings after installation by using the `az k8s-extension update` command: ```azurecli-interactive az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true ``` +You can use other configuration settings as needed for your deployment. For example, to change the kubelet root directory while creating a cluster, modify the az k8s-extension create command: ++```azurecli-interactive +az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.enable secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet +``` ++ ## Uninstall the Azure Key Vault Secrets Provider extension To uninstall the extension, run the following command: To confirm that the extension instance has been deleted, run the following comma az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP ``` -If the extension was successfully removed, you won't see the the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array. +If the extension was successfully removed, you won't see the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array. ## Reconciliation and troubleshooting -The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-exstension create` command again with the existing extension instance name. +The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). 
If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name. For more information about resolving common issues, see the open source troubleshooting guides for [Azure Key Vault provider for Secrets Store CSI driver](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) and [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html). |
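To make the `syncSecret.enabled` setting discussed above more concrete, here's a minimal `SecretProviderClass` sketch that mirrors a mounted Key Vault secret into a Kubernetes Secret through `secretObjects`; the vault name, tenant ID, and object names are placeholders rather than values from this tutorial:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: akv-demo-sync
spec:
  provider: azure
  secretObjects:                    # only honored when syncSecret.enabled=true
    - secretName: demo-k8s-secret   # Kubernetes Secret created from mounted content
      type: Opaque
      data:
        - objectName: DemoSecret    # must match an objectName listed under parameters
          key: demo-value
  parameters:
    keyvaultName: "<AKV-name>"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: DemoSecret
          objectType: secret
```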
azure-arc | Organize Inventory Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/organize-inventory-servers.md | + + Title: How to organize and inventory servers using hierarchies, tagging, and reporting +description: Learn how to organize and inventory servers using hierarchies, tagging, and reporting. Last updated : 03/03/2023++++# Organize and inventory servers with hierarchies, tagging, and reporting ++Azure Arc-enabled servers allows customers to develop an inventory across hybrid, multicloud, and edge workloads with the organizational and reporting capabilities native to Azure management. Azure Arc-enabled servers supports a breadth of platforms and distributions across Windows and Linux. Arc-enabled servers is also domain agnostic and integrates with Azure Lighthouse for multi-tenant customers. ++By projecting resources into the Azure management plane, Azure Arc empowers customers to leverage the organizational, tagging, and querying capabilities native to Azure. ++## Organize resources with built-in Azure hierarchies ++Azure provides four levels of management scope: ++- Management groups +- Subscriptions +- Resource groups +- Resources ++These levels of management help to manage access, policies, and compliance more efficiently. For example, if you apply a policy at one level, it propagates down to lower levels, helping improve governance posture. Moreover, these levels can be used to scope policies and security controls. For Arc-enabled servers, the different business units, applications, or workloads can be used to derive the hierarchical structure in Azure. Once resources have been onboarded to Azure Arc, you can seamlessly move an Arc-enabled server between different resource groups and scopes. +++## Tagging resources to capture additional, customizable metadata ++Tags are metadata elements you apply to your Azure resources. They are key-value pairs that help identify resources, based on settings relevant to your organization. For example, you can tag the environment for a resource as *Production* or *Testing*. Alternatively, you can use tagging to capture the ownership for a resource, separating the *Creator* or *Administrator*. Tags can also capture details on the resource itself, such as the physical datacenter, business unit, or workload. You can apply tags to your Azure resources, resource groups, and subscriptions. This extends to infrastructure outside of Azure as well, through Azure Arc. +++You can define tags in Azure portal through a simple point and click method. Tags can be defined when onboarding servers to Azure Arc-enabled servers or on a per-server basis. Alternatively, you can use Azure CLI, Azure PowerShell, ARM templates, or Azure policy for scalable tag deployments. Tags can be used to filter operations as well, such as the deployment of extensions or service attachments. This provides not only a more comprehensive inventory of your servers, but also operational flexibility and ease of management. +++## Reporting and querying with Azure Resource Graph (ARG) ++Numerous types of data are collected with Azure Arc-enabled servers as part of the instance metadata. This includes the platform, operating system, presence of SQL server, or AWS and GCP details. These attributes can be queried at scale using Azure Resource Graph. 
++Azure Resource Graph is an Azure service designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. These queries provide the ability to query resources with complex filtering, grouping, and sorting by resource properties. ++Results can be easily visualized and exported to other reporting solutions. Moreover there are dozens of built-in Azure Resource Graph queries capturing salient information across Azure VMs and Arc-enabled servers, such as their VM extensions, regional breakdown, and operating systems. ++## Additional resources ++* [What is Azure Resource Graph?](../../governance/resource-graph/overview.md) ++* [Azure Resource Graph sample queries for Azure Arc-enabled servers](resource-graph-samples.md) ++* [Use tags to organize your Azure resources and management hierarchy](/azure/azure-resource-manager/management/tag-resources?tabs=json) |
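As an illustration of the Resource Graph reporting described in this article, the following Azure CLI sketch counts Arc-enabled servers by operating system; it assumes the `resource-graph` CLI extension is installed, and the projected property names follow the common machine schema rather than anything specific to this article:

```azurecli
# Count Azure Arc-enabled servers by operating system name
az graph query -q "
resources
| where type == 'microsoft.hybridcompute/machines'
| extend os = tostring(properties.osName)
| summarize machineCount = count() by os
"
```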
azure-cache-for-redis | Cache Redis Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md | Some popular modules are available for use in the Enterprise tier of Azure Cache |RedisTimeSeries | No | Yes | No | |RedisJSON | No | Yes | Yes | -Currently, `RediSearch` is the only module that can be used concurrently with active geo-replication. > [!NOTE] > Currently, you can't manually load any modules into Azure Cache for Redis. Manually updating modules version is also not possible.-> +++## Using modules with active geo-replication +Only the `RediSearch` and `RedisJSON` modules can be used concurrently with [active geo-replication](cache-how-to-active-geo-replication.md). ++Using these modules, you can implement searches across groups of caches that are synchronized in an active-active configuration. Also, you can search JSON structures in your active-active configuration. ## Client library support RedisBloom adds four probabilistic data structures to a Redis server: **bloom fi | **Data structure** | **Description** | **Example application**| | ||-|-| **Bloom and Cuckoo filters** | Tells you if an item is either (a) certainly not in a set or (b) potentially in a set. | Checking if an email has already been sent to a user| +| **Bloom and Cuckoo filters** | Tells you if an item is either (a) definitely not in a set or (b) potentially in a set. | Checking if an email has already been sent to a user| |**Count-min sketch** | Determines the frequency of events in a stream | Counting how many times an IoT device reported a temperature under 0 degrees Celsius. |-|**Top-k** | Finds the `k` most frequently seen items | Determine the most frequent words used in War and Peace. (for example, setting k = 50 will return the 50 most common words in the book) | +|**Top-k** | Finds the `k` most frequently seen items | Determine the most frequent words used in War and Peace. (for example, setting k = 50 returns the 50 most common words in the book) | **Bloom and Cuckoo** filters are similar to each other, but each has a unique set of advantages and disadvantages that are beyond the scope of this documentation. |
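To make the table above concrete, the following sketch shows a few representative RedisBloom commands run against a cache; key names are illustrative, and lines starting with `#` are annotations rather than commands:

```
# Bloom filter: remember newsletter recipients and test membership
BF.ADD newsletter:sent user@example.com
BF.EXISTS newsletter:sent user@example.com
BF.EXISTS newsletter:sent other@example.com

# Top-k: track the 50 most frequent words seen in a stream
TOPK.RESERVE book:words 50
TOPK.ADD book:words war peace war
TOPK.LIST book:words
```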
azure-monitor | Agent Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md | The Log Analytics agent for Linux is provided in a self-extracting and installab sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy user>:<proxy password>@<proxy address>:<proxy port> -w <workspace id> -s <shared key> ``` -1. To configure the Linux computer to connect to a Log Analytics workspace in Azure Government cloud, run the following command that provides the workspace ID and primary key copied earlier: +1. To configure the Linux computer to connect to a Log Analytics workspace in Azure Government or Azure China cloud, run the following command that provides the workspace ID and primary key copied earlier, substituting `opinsights.azure.us` or `opinsights.azure.cn` respectively for the domain name: ```- sudo sh ./omsagent-*.universal.x64.sh --upgrade -w <workspace id> -s <shared key> -d opinsights.azure.us + sudo sh ./omsagent-*.universal.x64.sh --upgrade -w <workspace id> -s <shared key> -d <domain name> ``` To install the agent packages and configure the agent to report to a specific Log Analytics workspace at a later time, run: |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | This section outlines supported scenarios. ### Supported platforms and frameworks -Supported platforms and frameworks are listed below. +This section lists all supported platforms and frameworks. #### Azure service integration (portal enablement, Azure Resource Manager deployments) * [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md) Several other community-supported Application Insights SDKs exist. However, Azur +## Frequently asked questions ++Review [frequently asked questions](../faq.yml). + ## Troubleshooting -### Frequently asked questions +Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/welcome-azure-monitor) for Application Insights. ++## Help and support -Review [frequently asked questions](../faq.yml). ### Microsoft Q&A questions forum Post general questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html). Leave product feedback for the engineering team on [UserVoice](https://feedback. ## Next steps - [Create a resource](create-workspace-resource.md)-- [Application Map](app-map.md)-- [Transaction search](diagnostic-search.md)+- [Auto-instrumentation overview](codeless-overview.md) +- [Overview dashboard](overview-dashboard.md) +- [Availability overview](availability-overview.md) +- [Application Map](app-map.md) |
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 11/15/2022 Last updated : 03/04/2023 ms.devlang: python Azure Monitor supports distributed tracing, metric collection, and logging of Py Microsoft's supported solution for tracking and exporting data for your Python applications is through the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters). -Any other telemetry SDKs for Python *are unsupported and are not recommended* by Microsoft to use as a telemetry solution. +Microsoft doesn't recommend using any other telemetry SDKs for Python as a telemetry solution because they're unsupported. OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). We continue to recommend OpenCensus while OpenTelemetry gradually matures. OpenCensus maps the following exporters to the types of telemetry that you see i 1. First, let's generate some local log data. ```python+ import logging logger = logging.getLogger(__name__) - def valuePrompt(): - line = input("Enter a value: ") - logger.warning(line) - def main():- while True: - valuePrompt() + """Generate random log data.""" + for num in range(5): + logger.warning(f"Log Entry - {num}") if __name__ == "__main__": main() ``` -1. The code continuously asks for a value to be entered. A log entry is emitted for every entered value. +1. A log entry is emitted for each number in the range. ```output- Enter a value: 24 - 24 - Enter a value: 55 - 55 - Enter a value: 123 - 123 - Enter a value: 90 - 90 + Log Entry - 0 + Log Entry - 1 + Log Entry - 2 + Log Entry - 3 + Log Entry - 4 ``` -1. Entering values is helpful for demonstration purposes, but we want to emit the log data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample: +1. We want to see this log data to Azure Monitor. You can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. You may also pass the connection_string directly into the `AzureLogHandler`, but connection strings shouldn't be added to version control. - ```python - import logging - from opencensus.ext.azure.log_exporter import AzureLogHandler + ```shell + APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string> + ``` - logger = logging.getLogger(__name__) + We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample: - # TODO: replace the all-zero GUID with your instrumentation key. 
- logger.addHandler(AzureLogHandler( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000') - ) - # You can also instantiate the exporter directly if you have the environment variable - # `APPLICATIONINSIGHTS_CONNECTION_STRING` configured - # logger.addHandler(AzureLogHandler()) + ```python + import logging + from opencensus.ext.azure.log_exporter import AzureLogHandler - def valuePrompt(): - line = input("Enter a value: ") - logger.warning(line) + logger = logging.getLogger(__name__) + logger.addHandler(AzureLogHandler()) - def main(): - while True: - valuePrompt() + # Alternatively manually pass in the connection_string + # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>) - if __name__ == "__main__": - main() - ``` + """Generate random log data.""" + for num in range(5): + logger.warning(f"Log Entry - {num}") + ``` 1. The exporter sends log data to Azure Monitor. You can find the data under `traces`. - In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you'll see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md). + In this context, `traces` isn't the same as `tracing`. Here, `traces` refers to the type of telemetry that you see in Azure Monitor when you utilize `AzureLogHandler`. But `tracing` refers to a concept in OpenCensus and relates to [distributed tracing](./distributed-tracing.md). > [!NOTE] > The root logger is configured with the level of `warning`. That means any logs that you send that have less severity are ignored, and in turn, won't be sent to Azure Monitor. For more information, see [Logging documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel). OpenCensus maps the following exporters to the types of telemetry that you see i from opencensus.ext.azure.log_exporter import AzureLogHandler logger = logging.getLogger(__name__)- # TODO: replace the all-zero GUID with your instrumentation key. - logger.addHandler(AzureLogHandler( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000') - ) + logger.addHandler(AzureLogHandler()) + # Alternatively manually pass in the connection_string + # logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>) properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}} OpenCensus maps the following exporters to the types of telemetry that you see i #### Configure logging for Django applications -You can configure logging explicitly in your application code like the preceding for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django settings configuration. For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/). +You can configure logging explicitly in your application code like the preceding for your Django applications, or you can specify it in Django's logging configuration. This code can go into whatever file you use for Django site's settings configuration, typically `settings.py`. ++For information on how to configure Django settings, see [Django settings](https://docs.djangoproject.com/en/4.0/topics/settings/). 
For more information on how to configure logging, see [Django logging](https://docs.djangoproject.com/en/4.0/topics/logging/). ```json LOGGING = { "handlers": { "azure": { "level": "DEBUG",- "class": "opencensus.ext.azure.log_exporter.AzureLogHandler", - "connection_string": "<your-application-insights-connection-string>", - }, + "class": "opencensus.ext.azure.log_exporter.AzureLogHandler", + "connection_string": "<appinsights-connection-string>", + }, "console": { "level": "DEBUG", "class": "logging.StreamHandler", "stream": sys.stdout,- }, + }, }, "loggers": { "logger_name": {"handlers": ["azure", "console"]}, LOGGING = { Be sure you use the logger with the same name as the one specified in your configuration. ```python+# views.py + import logging+from django.shortcuts import request logger = logging.getLogger("logger_name") logger.warning("this will be tracked")+ ``` #### Send exceptions import logging from opencensus.ext.azure.log_exporter import AzureLogHandler logger = logging.getLogger(__name__)-# TODO: replace the all-zero GUID with your instrumentation key. -logger.addHandler(AzureLogHandler( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000') -) +logger.addHandler(AzureLogHandler()) +# Alternatively, manually pass in the connection_string +# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>) properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}} You can send `customEvent` telemetry in exactly the same way that you send `trac ```python import logging- from opencensus.ext.azure.log_exporter import AzureEventHandler logger = logging.getLogger(__name__)-logger.addHandler(AzureEventHandler(connection_string='InstrumentationKey=<your-instrumentation_key-here>')) +logger.addHandler(AzureLogHandler()) +# Alternatively manually pass in the connection_string +# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>) + logger.setLevel(logging.INFO) logger.info('Hello, World!') ``` OpenCensus.stats supports four aggregation methods but provides partial support - **Count**: The count of the number of measurement points. The value is cumulative, can only increase, and resets to 0 on restart. - **Sum**: A sum up of the measurement points. The value is cumulative, can only increase, and resets to 0 on restart. - **LastValue**: Keeps the last recorded value and drops everything else.-- **Distribution**: Histogram distribution of the measurement points. *This method is not supported by the Azure exporter*.+- **Distribution**: The Azure exporter doesn't support the histogram distribution of the measurement points. ### Count aggregation example -1. First, let's generate some local metric data. We'll create a metric to track the number of times the user selects the **Enter** key. +1. First, let's generate some local metric data. We create a metric to track the number of times the user selects the **Enter** key. 
```python+ from datetime import datetime from opencensus.stats import aggregation as aggregation_module from opencensus.stats import measure as measure_module OpenCensus.stats supports four aggregation methods but provides partial support mmap = stats_recorder.new_measurement_map() tmap = tag_map_module.TagMap() - def prompt(): - input("Press enter.") - mmap.measure_int_put(prompt_measure, 1) - mmap.record(tmap) - metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow())) - print(metrics[0].time_series[0].points[0]) - def main():- while True: - prompt() + for _ in range(4): + mmap.measure_int_put(prompt_measure, 1) + mmap.record(tmap) + metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow())) + print(metrics[0].time_series[0].points[0]) if __name__ == "__main__": main() ```-1. Running the code repeatedly prompts you to select **Enter**. A metric is created to track the number of times **Enter** is selected. With each entry, the value is incremented and the metric information appears in the console. The information includes the current value and the current time stamp when the metric was updated. ++1. Metrics are created to track many times. With each entry, the value is incremented and the metric information appears in the console. The information includes the current value and the current time stamp when the metric was updated. ```output- Press enter. Point(value=ValueLong(5), timestamp=2019-10-09 20:58:04.930426)- Press enter. - Point(value=ValueLong(6), timestamp=2019-10-09 20:58:06.570167) - Press enter. - Point(value=ValueLong(7), timestamp=2019-10-09 20:58:07.138614) + Point(value=ValueLong(6), timestamp=2019-10-09 20:58:05.170167) + Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.438614) + Point(value=ValueLong(7), timestamp=2019-10-09 20:58:05.834216) ``` 1. Entering values is helpful for demonstration purposes, but we want to emit the metric data to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample: OpenCensus.stats supports four aggregation methods but provides partial support mmap = stats_recorder.new_measurement_map() tmap = tag_map_module.TagMap() - # TODO: replace the all-zero GUID with your instrumentation key. 
- exporter = metrics_exporter.new_metrics_exporter( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000', - export_interval=60, # Application Insights backend assumes aggregation on a 60s interval - ) - # You can also instantiate the exporter directly if you have the environment variable - # `APPLICATIONINSIGHTS_CONNECTION_STRING` configured - # exporter = metrics_exporter.new_metrics_exporter() + exporter = metrics_exporter.new_metrics_exporter() + # Alternatively manually pass in the connection_string + # exporter = metrics_exporter.new_metrics_exporter(connection_string='<appinsights-connection-string>') view_manager.register_exporter(exporter) - def prompt(): - input("Press enter.") - mmap.measure_int_put(prompt_measure, 1) - mmap.record(tmap) - metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow())) - print(metrics[0].time_series[0].points[0]) - def main():- while True: - prompt() + for _ in range(10): + input("Press enter.") + mmap.measure_int_put(prompt_measure, 1) + mmap.record(tmap) + metrics = list(mmap.measure_to_view_map.get_metrics(datetime.utcnow())) + print(metrics[0].time_series[0].points[0]) if __name__ == "__main__": main() ``` -1. The exporter sends metric data to Azure Monitor at a fixed interval. You must set this value to 60s as Application Insights backend assumes aggregation of metrics points on a 60s time interval. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The data is cumulative, can only increase, and resets to 0 on restart. +1. The exporter sends metric data to Azure Monitor at a fixed interval. You must set this value to 60 seconds as Application Insights backend assumes aggregation of metrics points on a 60-second time interval. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The data is cumulative, can only increase, and resets to 0 on restart. You can find the data under `customMetrics`, but the `customMetrics` properties `valueCount`, `valueSum`, `valueMin`, `valueMax`, and `valueStdDev` aren't effectively used. The OpenCensus Python SDK allows you to add custom dimensions to your metrics te ... ``` -1. Under the `customMetrics` table, all metrics records emitted by using `prompt_view` will have custom dimensions `{"url":"http://example.com"}`. +1. Under the `customMetrics` table, all metric records emitted by using `prompt_view` have custom dimensions `{"url":"http://example.com"}`. 1. To produce tags with different values by using the same keys, create new tag maps for them. By default, the metrics exporter sends a set of performance counters to Azure Mo ... exporter = metrics_exporter.new_metrics_exporter( enable_standard_metrics=False,- connection_string='InstrumentationKey=<your-instrumentation-key-here>') + ) ... ``` For information on how to modify tracked telemetry before it's sent to Azure Mon tracer = Tracer(sampler=ProbabilitySampler(1.0)) - def valuePrompt(): + def main(): with tracer.span(name="test") as span:- line = input("Enter a value: ") - print(line) + for value in range(5): + print(value) - def main(): - while True: - valuePrompt() if __name__ == "__main__": main() ``` -1. Running the code repeatedly prompts you to enter a value. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. 
The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/). +1. With each entry, the value is printed to the shell. The OpenCensus Python module generates a corresponding piece of `SpanData`. The OpenCensus project defines a [trace as a tree of spans](https://opencensus.io/core-concepts/tracing/). ```output- Enter a value: 4 - 4 + 0 [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='15ac5123ac1f6847', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:22.805429Z', end_time='2019-06-27T18:21:44.933405Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]- Enter a value: 25 - 25 + 1 [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='2e512f846ba342de', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:44.933405Z', end_time='2019-06-27T18:21:46.156787Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)]- Enter a value: 100 - 100 + 2 [SpanData(name='test', context=SpanContext(trace_id=8aa41bc469f1a705aed1bdb20c342603, span_id=None, trace_options=TraceOptions(enabled=True), tracestate=None), span_id='f3f9f9ee6db4740a', parent_span_id=None, attributes=BoundedDict({}, maxlen=32), start_time='2019-06-27T18:21:46.157732Z', end_time='2019-06-27T18:21:47.269583Z', child_span_count=0, stack_trace=None, annotations=BoundedList([], maxlen=32), message_events=BoundedList([], maxlen=128), links=BoundedList([], maxlen=32), status=None, same_process_as_parent_span=None, span_kind=0)] ``` -1. Entering values is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample: +1. Viewing the output is helpful for demonstration purposes, but we want to emit `SpanData` to Azure Monitor. Pass your connection string directly into the exporter. Or you can specify it in an environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`. We recommend using the connection string to instantiate the exporters that are used to send telemetry to Application Insights. Modify your code from the previous step based on the following code sample: ```python from opencensus.ext.azure.trace_exporter import AzureExporter from opencensus.trace.samplers import ProbabilitySampler from opencensus.trace.tracer import Tracer - # TODO: replace the all-zero GUID with your instrumentation key. 
tracer = Tracer(- exporter=AzureExporter( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'), + exporter=AzureExporter(), sampler=ProbabilitySampler(1.0), )- # You can also instantiate the exporter directly if you have the environment variable - # `APPLICATIONINSIGHTS_CONNECTION_STRING` configured - # exporter = AzureExporter() -- def valuePrompt(): - with tracer.span(name="test") as span: - line = input("Enter a value: ") - print(line) -+ # Alternatively manually pass in the connection_string + # exporter = AzureExporter( + # connection_string='<appinsights-connection-string>', + # ... + # ) + def main():- while True: - valuePrompt() + with tracer.span(name="test") as span: + for value in range(5): + print(value) if __name__ == "__main__": main() ``` -1. Now when you run the Python script, you should still be prompted to enter values, but only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`. +1. Now when you run the Python script, only the value is being printed in the shell. The created `SpanData` is sent to Azure Monitor. You can find the emitted span data under `dependencies`. For more information about outgoing requests, see OpenCensus Python [dependencies](./opencensus-python-dependency.md). For more information on incoming requests, see OpenCensus Python [requests](./opencensus-python-request.md). Each exporter accepts the same arguments for configuration, passed through the c `connection_string`| The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.| `credential`| Credential class used by Azure Active Directory authentication. See the "Authentication" section that follows.| `enable_standard_metrics`| Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.|-`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`. For metrics you MUST set this to 60s or else your metric aggregations will not make sense in the metrics explorer.| +`export_interval`| Used to specify the frequency in seconds of exporting. Defaults to `15s`. For metrics, you MUST set it to 60 seconds or else your metric aggregations don't make sense in the metrics explorer.| `grace_period`| Used to specify the timeout for shutdown of exporters in seconds. Defaults to `5s`.| `instrumentation_key`| The instrumentation key used to connect to your Azure Monitor resource.| `logging_sampling_rate`| Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to `1.0`.| |
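To connect the configuration table above to code, here's a brief sketch (not taken from the article) that creates a metrics exporter with an explicit 60-second export interval and a trace exporter sampling roughly 10% of spans; it assumes `APPLICATIONINSIGHTS_CONNECTION_STRING` is set in the environment:

```python
from opencensus.ext.azure import metrics_exporter
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Metrics: Application Insights assumes aggregation on a 60-second interval.
m_exporter = metrics_exporter.new_metrics_exporter(export_interval=60)

# Traces: export about 10% of spans instead of all of them.
tracer = Tracer(exporter=AzureExporter(), sampler=ProbabilitySampler(0.1))
```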
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | Start by registering an Azure Active Directory application to authenticate against the API :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app."::: ## Create a data collection endpoint-A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent. +A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as any virtual machines it's associated with, but it doesn't need to be in the same region as the Log Analytics workspace where the data will be sent or the data collection rule being used. 1. To create a new DCE, go to the **Monitor** menu in the Azure portal. Select **Data Collection Endpoints** and then select **Create**. |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | -This article lists significant changes to Azure Monitor documentation. +This article lists significant changes to Azure Monitor documentation. ++## February 2023 + +|Subservice| Article | Description | +|||| +Agents|[Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md)|Added release notes for the Azure Monitor Agent Linux 1.25 release.| +Agents|[Migrate to Azure Monitor Agent from Log Analytics agent](agents/azure-monitor-agent-migration.md)|Updated guidance for migrating from Log Analytics Agent to Azure Monitor Agent.| +Alerts|[Manage your alert rules](alerts/alerts-manage-alert-rules.md)|Included a limitation and workaround for resource health alerts. If you apply a target resource type scope filter to the alerts rules page, the alerts rules list doesn't include resource health alert rules.| +Alerts|[Customize alert notifications by using Logic Apps](alerts/alerts-logic-apps.md)|Added instructions for additional customizations that you can include when using Logic Apps to create alert notifications. You can extract information about the affected resource from the resource's tags, then include the resource tags in the alert payload and use the information in the logical expressions that create the notifications.| +Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups-create-resource-manager-template.md)|Combined two articles about creating action groups into one article.| +Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Clarified that you can't pass security certificates in a webhook action in action groups.| +Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Added information about adding custom properties to the alert payload when you use action groups.| +Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|Removed the option for managing alert instances using the CLI.| +Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|The continuous export deprecation notice has been added to this article for more visibility. It's recommended to migrate to workspace-based Application Insights resources as soon as possible to take advantage of new features.| +Application-Insights|[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)|Client-side JavaScript SDK extensions have been consolidated into two new articles called "Framework extensions" and "Feature Extensions". We've additionally created new stand-alone Upgrade and Troubleshooting articles.| +Application-Insights|[Create an Application Insights resource](app/create-new-resource.md)|Classic workspace documentation has been moved to the Legacy and Retired Features section of our table of contents and we've made both the feature retirement and upgrade path clearer. 
It's recommended to migrate to workspace-based Application Insights resources as soon as possible to take advantage of new features.| +Application-Insights|[Monitor Azure Functions with Azure Monitor Application Insights](app/monitor-functions.md)|We've overhauled our documentation on Azure Functions integration with Application Insights.| +Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](app/opentelemetry-enable.md)|Java OpenTelemetry examples have been updated.| +Application-Insights|[Application Monitoring for Azure App Service and Java](app/azure-web-apps-java.md)|We updated and separated out the instructions to manually deploy the latest Application Insights Java version.| +Containers|[Enable Container insights for Azure Kubernetes Service (AKS) cluster](containers/container-insights-enable-aks.md)|Added section for enabling private link without managed identity authentication.| +Containers|[Syslog collection with Container Insights (preview)](containers/container-insights-syslog.md)|Added use of ARM templates for enabling syslog collection| +Essentials|[Data collection transformations in Azure Monitor](essentials/data-collection-transformations.md)|Added section and sample for using transformations to send to multiple destinations.| +Essentials|[Custom metrics in Azure Monitor (preview)](essentials/metrics-custom-overview.md)|Added reference to the limit of 64 KB on the combined length of all custom metrics names| +Essentials|[Azure monitoring REST API walkthrough](essentials/rest-api-walkthrough.md)|Refresh REST API walkthrough| +Essentials|[Collect Prometheus metrics from AKS cluster (preview)](essentials/prometheus-metrics-enable.md)|Added Enabling Prometheus metric collection using Azure policy and Bicep| +Essentials|[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](essentials/prometheus-metrics-multiple-workspaces.md)|Updated sending metrics to multiple Azure Monitor workspaces| +General|[Analyzing and visualize data](best-practices-analysis.md)|Revised the article about analyzing and visualizing monitoring data to provide a comparison of the different visualization tools and guide customers when they would choose each tool for their implementation. 
| +Logs|[Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Minor fixes and updated sample data.| +Logs|[Analyze usage in a Log Analytics workspace](logs/analyze-usage.md)|Added query for data that has the IsBillable indicator set incorrectly, which could result in incorrect billing.| +Logs|[Add or delete tables and columns in Azure Monitor Logs](logs/create-custom-table.md)|Added custom column naming limitations.| +Logs|[Enhance data and service resilience in Azure Monitor Logs with availability zones](logs/availability-zones.md)|Clarified availability zone support for data resilience and service resilience and added new supported regions.| +Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|New article that explains how to monitor the service and resource health of a Log Analytics workspace.| +Logs|[Feature extensions for Application Insights JavaScript SDK (Click Analytics)](app/javascript-click-analytics-plugin.md)|You can now launch Power BI and create a dataset and report connected to a Log Analytics query with one click.| +Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new tables to the list of tables that support Basic logs.| +Logs|[Manage tables in a Log Analytics workspace]()|Refreshed all Log Analytics workspace images with the new left-hand menu (ToC).| +Security-Fundamentals|[Monitoring App Service](../../articles/app-service/monitor-app-service.md)|Revised the Azure Monitor Overview to improve usability. The article has been cleaned up and streamlined, and better reflects the product architecture as well as the customer experience. | +Snapshot-Debugger|[host.json reference for Azure Functions 2.x and later](../../articles/azure-functions/functions-host-json.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger](profiler/profiler-bring-your-own-storage.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](snapshot-debugger/snapshot-collector-release-notes.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](snapshot-debugger/snapshot-debugger-function-app.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[ Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](snapshot-debugger/snapshot-debugger-troubleshoot.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines](snapshot-debugger/snapshot-debugger-vm.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-debugger.md)|Removing the TSG from the AzMon TOC and adding to the support TOC| +Virtual-Machines|[Monitor virtual machines with Azure Monitor: Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)|New article| 
+Visualizations|[Use JSONPath to transform JSON data in workbooks](visualize/workbooks-jsonpath.md)|Added information about using JSONPath to convert data types in Azure Workbooks.| +Containers|[Configure Container insights cost optimization data collection rules]()|New article on preview of cost optimization settings.| ++ ## January 2023 |Subservice| Article | Description | Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency]( Application-Insights|[Application Insights for Azure VMs and Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md)|Easily monitor your IIS-hosted .NET Framework and .NET Core applications running on Azure VMs and Virtual Machine Scale Sets using a new App Insights Extension.| Application-Insights|[Sampling in Application Insights](app/sampling.md)|We've added embedded links to assist with looking up type definitions. (Dependency, Event, Exception, PageView, Request, Trace)| Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Instructions are now available on how to set the http proxy using an environment variable, which overrides the JSON configuration. We've also provided a sample to configure connection string at runtime.|-Application-Insights|[Application Insights for Java 2.x](app/deprecated-java-2x.md)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement .| +Application-Insights|[Application Insights for Java 2.x](app/deprecated-java-2x.md)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement.| Autoscale|[Diagnostic settings in Autoscale](autoscale/autoscale-diagnostics.md)|Updated and expanded content| Autoscale|[Overview of common autoscale patterns](autoscale/autoscale-common-scale-patterns.md)|Clarification of weekend profiles| Autoscale|[Autoscale with multiple profiles](autoscale/autoscale-multiprofile.md)|Added clarifications for profile end times| Logs|[Send custom metrics for an Azure resource to the Azure Monitor metric stor Logs|[Migrate from Splunk to Azure Monitor Logs](logs/migrate-splunk-to-azure-monitor-logs.md)|New article that explains how to migrate your Splunk Observability deployment to Azure Monitor Logs for logging and log data analysis.| Logs|[Manage access to Log Analytics workspaces](logs/manage-access.md)|Added permissions required to run a search job and restore archived data.| Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added information about how to modify a table schema using the API.|-Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Per customer feedback, added new note that Consumption plan is not supported| +Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Per customer feedback, added new note that Consumption plan isn't supported| Virtual-Machines|[Collect IIS logs with Azure Monitor Agent](agents/data-collection-iis.md)|Added sample log queries.| Virtual-Machines|[Collect text logs with Azure Monitor Agent](agents/data-collection-text-log.md)|Added sample log queries.| Virtual-Machines|[Monitor virtual machines with Azure Monitor: Deploy agent](vm/monitor-virtual-machine-agent.md)|Rewritten for Azure Monitor agent.| Virtual-Machines|[Monitor Azure virtual 
machines](../../articles/virtual-machine |Subservice| Article | Description | |||| General|[Azure Monitor for existing Operations Manager customers](azure-monitor-operations-manager.md)|Updated for AMA and SCOM managed instance.|-Application-Insights|[Create an Application Insights resource](app/create-new-resource.md)|Classic Application Insights resources are deprecated and support will end on February 29th, 2024. Migrate to workspace-based resources to take advantage of new capabilities.| +Application-Insights|[Create an Application Insights resource](app/create-new-resource.md)|Classic Application Insights resources are deprecated and support will end on February 29, 2024. Migrate to workspace-based resources to take advantage of new capabilities.| Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](app/opentelemetry-enable.md)|Updated Node.js sample code for JavaScript and TypeScript.| Application-Insights|[System performance counters in Application Insights](app/performance-counters.md)|Updated code samples for .NET 6/7.| Application-Insights|[Sampling in Application Insights](app/sampling.md)|Updated code samples for .NET 6/7.| |
azure-resource-manager | Child Resource Name Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md | The **full name** of the child resource uses the pattern: {parent-resource-name}/{child-resource-name} ``` -If you have more two levels in the hierarchy, keep repeating parent names: +If you have more than two levels in the hierarchy, keep repeating parent names: ```bicep {parent-resource-name}/{child-level1-resource-name}/{child-level2-resource-name} |
azure-resource-manager | Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md | Title: Data types in Bicep description: Describes the data types that are available in Bicep Previously updated : 12/12/2022 Last updated : 01/10/2023 # Data types in Bicep -This article describes the data types supported in [Bicep](./overview.md). +This article describes the data types supported in [Bicep](./overview.md). [User-defined data types](./user-defined-data-types.md) are currently in preview. ## Supported types |
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | + + Title: User-defined types in Bicep +description: Describes how to define and use user-defined data types in Bicep. + Last updated : 01/09/2023+++# User-defined data types in Bicep (Preview) ++Learn how to use user-defined data types in Bicep. ++[Bicep version 1.2 or newer](./install.md) is required to use this feature. ++## Enable the preview feature ++To enable this preview, modify your project's [bicepconfig.json](./bicep-config.md) file to include the following JSON: ++```json +{ + "experimentalFeaturesEnabled": { + "userDefinedTypes": true + } +} +``` ++## User-defined data type syntax ++You can use the `type` statement to define user-defined data types. In addition, you can also use type expressions in some places to define custom types. ++```bicep +type <userDefinedDataTypeName> = <typeExpression> +``` ++The valid type expressions include: ++- Symbolic references are identifiers that refer to an *ambient* type (like `string` or `int`) or a user-defined type symbol declared in a `type` statement: ++ ```bicep + // Bicep data type reference + type myStringType = string ++ // user-defined type reference + type myOtherStringType = myStringType + ``` ++- Primitive literals, including strings, integers, and booleans, are valid type expressions. For example: ++ ```bicep + // a string type with three allowed values + type myStringLiteralType = 'bicep' | 'arm' | 'azure' ++ // an integer type with one allowed value + type myIntLiteralType = 10 ++ // a boolean type with one allowed value + type myBoolLiteralType = true + ``` ++- Array types can be declared by suffixing `[]` to any valid type expression: ++ ```bicep + // A string type array + type myStrStringsType1 = string[] + // A string type array with three allowed values + type myStrStringsType2 = ('a' | 'b' | 'c')[] ++ type myIntArrayOfArraysType = int[][] ++ // A mixed-type array with four allowed values + type myMixedTypeArrayType = ('fizz' | 42 | {an: 'object'} | null)[] + ``` ++- Object types contain zero or more properties between curly brackets: ++ ```bicep + type storageAccountConfigType = { + name: string + sku: string + } + ``` ++ Each property in an object consists of a key and a value. The key and value are separated by a colon `:`. The key may be any string (values that would not be a valid identifier must be enclosed in quotes), and the value may be any type syntax expression. ++ Properties are required unless they have an optionality marker `?` between the property name and the colon. For example, the `sku` property in the following example is optional: ++ ```bicep + type storageAccountConfigType = { + name: string + sku?: string + } + ``` ++ **Recursion** ++ Object types may use direct or indirect recursion so long as at least one leg of the path to the recursion point is optional. For example, the `myObjectType` definition in the following example is valid because the directly recursive `recursiveProp` property is optional: ++ ```bicep + type myObjectType = { + stringProp: string + recursiveProp?: myObjectType + } + ``` ++ But the following would not be valid because none of `level1`, `level2`, `level3`, `level4`, or `level5` is optional. 
++ ```bicep + type invalidRecursiveObjectType = { + level1: { + level2: { + level3: { + level4: { + level5: invalidRecursiveObjectType + } + } + } + } + } + ``` ++- [Bicep unary operators](./operators.md) can be used with integer and boolean literals or references to integer or boolean literal-typed symbols: ++ ```bicep + type negativeIntLiteral = -10 + type negatedIntReference = -negativeIntLiteral ++ type negatedBoolLiteral = !true + type negatedBoolReference = !negatedBoolLiteral + ``` ++- Unions may include any number of literal-typed expressions. Union types are translated into the [allowed-value constraint](./parameters.md#decorators) in Bicep, so only literals are permitted as members. ++ ```bicep + type oneOfSeveralObjects = {foo: 'bar'} | {fizz: 'buzz'} | {snap: 'crackle'} + type mixedTypeArray = ('fizz' | 42 | {an: 'object'} | null)[] + ``` ++In addition to being used in the `type` statement, type expressions can also be used in these places for creating user-defined data types: ++- As the type clause of a `param` statement. For example: ++ ```bicep + param storageAccountConfig { + name: string + sku: string + } + ``` ++- Following the `:` in an object type property. For example: ++ ```bicep + param storageAccountConfig { + name: string + properties: { + sku: string + } + } = { + name: 'store${uniqueString(resourceGroup().id)}' + properties: { + sku: 'Standard_LRS' + } + } + ``` ++- Preceding the `[]` in an array type expression. For example: ++ ```bicep + param mixedTypeArray ('fizz' | 42 | {an: 'object'} | null)[] + ``` ++## An example ++A typical Bicep file to create a storage account looks like: ++```bicep +param location string = resourceGroup().location +param storageAccountName string ++@allowed([ + 'Standard_LRS' + 'Standard_GRS' +]) +param storageAccountSKU string = 'Standard_LRS' ++resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { + name: storageAccountName + location: location + sku: { + name: storageAccountSKU + } + kind: 'StorageV2' +} +``` ++By using user-defined data types, it can look like: ++```bicep +param location string = resourceGroup().location ++type storageAccountSkuType = 'Standard_LRS' | 'Standard_GRS' ++type storageAccountConfigType = { + name: string + sku: storageAccountSkuType +} ++param storageAccountConfig storageAccountConfigType ++resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { + name: storageAccountConfig.name + location: location + sku: { + name: storageAccountConfig.sku + } + kind: 'StorageV2' +} +``` ++## Next steps ++- For a list of the Bicep data types, see [Data types](./data-types.md). |
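To try the user-defined types example end to end, you can deploy it with the Azure CLI. The following is a minimal sketch rather than part of the original article: it assumes the second Bicep file above is saved as `main.bicep`, and that `example-rg` and the storage account name are placeholders you replace with your own values.

```azurecli
# Make sure the local Bicep CLI is recent enough to honor the
# experimentalFeaturesEnabled section in bicepconfig.json.
az bicep upgrade

# Deploy the example, passing the object-typed parameter as inline JSON.
az deployment group create \
  --resource-group example-rg \
  --template-file main.bicep \
  --parameters storageAccountConfig='{"name": "storeexample001", "sku": "Standard_LRS"}'
```

If the parameter value doesn't match `storageAccountConfigType` (for example, an SKU outside the declared union), the deployment should fail validation before any resources are created.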
azure-resource-manager | Networking Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md | If you want to move networking resources to a new region, see [Tutorial: Move Az > [!NOTE] > Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address can't be moved across subscriptions. For virtual machines, you can [disassociate the public IP address](../../../virtual-network/ip-services/remove-public-ip-address-vm.md) before moving across subscriptions. -When moving a resource, you must also move its dependent resources (for example - public IP addresses, virtual network gateways, all associated connection resources). Local network gateways can be in a different resource group. +When moving a resource, you must also move its dependent resources (for example, public IP addresses, virtual network gateways, and all associated connection resources). Local network gateways can be in a different resource group. Although the virtual network assigned to an AKS instance can technically be moved, doing so breaks the cluster (see the following warning). ++> [!WARNING] +> Don't move the virtual network for an AKS cluster. The AKS cluster will stop working if its virtual network is moved. To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways. |
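If you script the move, the dependent resources called out above can be passed to a single move operation. The following sketch is illustrative only; the resource group names, resource names, and subscription ID are placeholders, and it assumes a network interface and its virtual network are the resources being moved together.

```azurecli
# Collect the IDs of the resource and its dependent virtual network.
nic_id=$(az network nic show --resource-group source-rg --name example-nic --query id --output tsv)
vnet_id=$(az network vnet show --resource-group source-rg --name example-vnet --query id --output tsv)

# Move both in one operation so the dependency moves with the resource.
az resource move \
  --destination-group target-rg \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000 \
  --ids "$nic_id" "$vnet_id"
```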
azure-video-indexer | Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md | Title: Azure Video Indexer insights overview description: This article gives a brief overview of Azure Video Indexer insights. Previously updated : 10/19/2022 Last updated : 03/03/2023 -Here some common insights: --|**Insight**|**Description**| -||| -|Audio effects|For more information, see [Audio effects detection](/legal/azure-video-indexer/audio-effects-detection-transparency-note?context=/azure/azure-video-indexer/context/context).| -|Scenes, shots, and keyframes|Selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). Scenes, shots, and keyframes are merged into one insight for easier consumption and navigation. When you select the desired scene you can see what shots and keyframes it consists of. For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).| -|Emotions|Identifies emotions based on speech and audio cues.| -|Faces|For more information, see [Faces detection](/legal/azure-video-indexer/face-detection-transparency-note?context=/azure/azure-video-indexer/context/context).| -|Textual logo detection|Matches a specific predefined text using Azure Video Indexer OCR. For example, if a user created a textual logo: "Microsoft", different appearances of the word *Microsoft* will be detected as the "Microsoft" logo. For more information, see [Detect textual logo](detect-textual-logo.md). -|Keywords|For more information, see [Keywords extraction](/legal/azure-video-indexer/keywords-transparency-note?context=/azure/azure-video-indexer/context/context).| -|Labels|For more information, see [Labels identification](/legal/azure-video-indexer/labels-identification-transparency-note?context=/azure/azure-video-indexer/context/context)| -|Named entities|For more information, see [Named entities](/legal/azure-video-indexer/named-entities-transparency-note?context=/azure/azure-video-indexer/context/context).| -|People|For more information, see [Observed people tracking & matched faces](/legal/azure-video-indexer/observed-matched-people-transparency-note?context=/azure/azure-video-indexer/context/context).| -|Topics|For more information, see [Topics inference](/legal/azure-video-indexer/topics-inference-transparency-note?context=/azure/azure-video-indexer/context/context).| -|OCR|For more information, see [OCR](/legal/azure-video-indexer/ocr-transparency-note?context=/azure/azure-video-indexer/context/context).| -|Sentiments|Sentiments are aggregated by their `sentimentType` field (`Positive`, `Neutral`, or `Negative`).| -|Speakers|Maps and understands which speaker spoke which words and when. Sixteen speakers can be detected in a single audio-file.| -|Transcript|For more information, see [Transcription, translation, language](/legal/azure-video-indexer/transcription-translation-lid-transparency-note?context=/azure/azure-video-indexer/context/context).| --For information about features and other insights, see [Azure Video Indexer insights](video-indexer-overview.md#video-models). 
+For information about features and other insights, see: ++- [Azure Video Indexer overview](video-indexer-overview.md) +- [Transparency note](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context) Once you [set up](video-indexer-get-started.md) an Azure Video Indexer account (see [account types](accounts-overview.md)) and [upload a video](upload-index-videos.md), you can view insights as described below. |
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo ## Service level change for Azure NetApp Files datastore -Based on the performance requirements of the datastore, you can change the service level of the Azure NetApp Files volume used for the datastore by following the instructions to [dynamically change the service level of a volume for Azure NetApp Files](../azure-netapp-files/dynamic-change-volume-service-level.md) -This has no impact to the Datastore or private cloud as there is no downtime involved and the IP address/mount path remain unchanged. However, the volume Resource ID will be changed due to the capacity pool change. Therefore to avoid any metadata mismatch re-issue the datastore create command via Azure CLI as follows: `az vmware datastore netapp-volume create`. +Based on the performance requirements of the datastore, you can change the service level of the Azure NetApp Files volume used for the datastore by following the instructions to [dynamically change the service level of a volume for Azure NetApp Files](../azure-netapp-files/dynamic-change-volume-service-level.md). +Changing the service level has no impact on the datastore or private cloud. There is no downtime and the volume's IP address/mount path remain unchanged. However, the volume's resource ID will change as a result of the capacity pool change. To correct any metadata mismatch, re-run the datastore creation in Azure CLI for the existing datastore with the new Resource ID for the Azure NetApp Files volume: +```azurecli +az vmware datastore netapp-volume create \ + --name <name of existing datastore> \ + --resource-group <resource group containing AVS private cloud> \ + --cluster <cluster name in AVS private cloud> \ + --private-cloud <name of AVS private cloud> \ + --volume-id /subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.NetApp/netAppAccounts/<NetApp account>/capacityPools/<changed capacity pool>/volumes/<volume name> +``` >[!IMPORTANT] -> The input values for **cluster** name, datastore **name**, **private-cloud** (SDDC) name, and **resource-group** must be **exactly the same as the current one**, and the **volume-id** is the new Resource ID of the volume. -- -**cluster** - -**name** - -**private-cloud** - -**resource-group** - -**volume-id** +> The parameters for datastore **name**, **resource-group**, **cluster**, and **private-cloud** (SDDC) must be **exactly the same as those on the existing datastore in the private cloud**. The **volume-id** is the updated Resource ID of the Azure NetApp Files volume after the service level change. ## Disconnect an Azure NetApp Files-based datastore from your private cloud |
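For reference, the service-level change itself can also be scripted before re-running the datastore command shown above. Treat this as a sketch under assumptions: every name is a placeholder, and it assumes the `az netappfiles volume pool-change` command and the `vmware` CLI extension are available in your CLI version.

```azurecli
# Move the volume to a capacity pool at the target service level
# (this is what changes the volume's Resource ID).
az netappfiles volume pool-change \
  --resource-group anf-rg \
  --account-name anf-account \
  --pool-name source-pool \
  --name datastore-volume \
  --new-pool-resource-id /subscriptions/<subscription ID>/resourceGroups/anf-rg/providers/Microsoft.NetApp/netAppAccounts/anf-account/capacityPools/target-pool

# After re-issuing the datastore create command, confirm the datastore
# now references the new volume Resource ID.
az vmware datastore show \
  --resource-group avs-rg \
  --private-cloud avs-private-cloud \
  --cluster Cluster-1 \
  --name existing-datastore
```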
backup | Backup Azure Enhanced Soft Delete About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md | Title: Overview of enhanced soft delete for Azure Backup (preview) description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 12/13/2022 Last updated : 03/06/2023 Enhanced soft delete is currently available in the following regions: East US, W - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults. - All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete.+- Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups. ## States of soft delete settings |
backup | Blob Backup Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md | Title: Configure operational backup for Azure Blobs -description: Learn how to configure and manage operational backup for Azure Blobs. - Previously updated : 09/28/2021+ Title: Configure and manage backup for Azure Blobs using Azure Backup +description: Learn how to configure and manage operational and vaulted backups for Azure Blobs. + Last updated : 02/20/2023+ -# Configure operational backup for Azure Blobs +# Configure and manage backup for Azure Blobs using Azure Backup -Azure Backup lets you easily configure operational backup for protecting block blobs in your storage accounts. This article explains how to configure operational backup on one or more storage accounts using the Azure portal. The article discusses the following: --- Things to know before you start-- Creating a Backup Vault-- Granting permissions to the Backup Vault on the storage accounts to be protected-- Creating a Backup policy-- Configuring operational backup on one or more storage accounts-- Effects on the backup up storage accounts+Azure Backup allows you to configure operational and vaulted backups to protect block blobs in your storage accounts. This article describes how to configure and manage backups on one or more storage accounts using the Azure portal. ## Before you start -- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. This solution doesn't maintain an additional copy of data in the vault.-- This solution allows you to retain your data for restore for up to 360 days. Long retention durations may, however, lead to longer time taken during the restore operation.+# [Operational backup](#tab/operational-backup) ++- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. This solution doesn't maintain an additional copy of data in the vault. This solution allows you to retain your data for restore for up to 360 days. Long retention durations may, however, lead to longer time taken during the restore operation. - The solution can be used to perform restores to the source storage account only and may result in data being overwritten.-- If you delete a container from the storage account by calling the Delete Container operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers.+- If you delete a container from the storage account by calling the *Delete Container operation*, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers. 
- Ensure that the **Microsoft.DataProtection** provider is registered for your subscription.-- Refer to the [support matrix](blob-backup-support-matrix.md) to learn more about the supported scenarios, limitations, and availability.++For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md). ++# [Vaulted backup](#tab/vaulted-backup) ++- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains as per the retention configured in the backup policy. You can retain data for a maximum of *10 years*. +- Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any conflicts arise due to the same name of containers, the restore operation fails. ++For more information about the supported scenarios, limitations, and availability, See the [support matrix](blob-backup-support-matrix.md). ++ ## Create a Backup vault To assign the required role for storage accounts that you need to protect, follo >[!NOTE] >You can also assign the roles to the vault at the Subscription or Resource Group levels according to your convenience. -1. In the storage account that needs to be protected, navigate to the **Access Control (IAM)** tab on the left navigation pane. +1. In the storage account that needs to be protected, go to the **Access Control (IAM)** tab on the left navigation pane. 1. Select **Add role assignments** to assign the required role.  To assign the required role for storage accounts that you need to protect, follo ## Create a backup policy -A backup policy typically governs the retention and schedule of your backups. Since operational backup for blobs is continuous in nature, you don't need a schedule to perform backups. The policy is essentially needed to specify the retention period. You can use and reuse the backup policy to configure backup for multiple storage accounts to a vault. +A backup policy defines the schedule and frequency of the recovery points creation, and its retention duration in the Backup vault. You can use a single backup policy for your vaulted backup, operational backup, or both. You can use the same backup policy to configure backup for multiple storage accounts to a vault. ++To create a backup policy, follow these steps: -Here are the steps to create a backup policy for operational backup of your blobs: +1. Go to **Backup center**, and then select **+ Policy**. This takes you to the create policy experience. -1. In your Backup vault, navigate to **Backup policies** and select **+Add** to start creating a backup policy. +2. Select the *data source type* as **Azure Blobs (Azure Storage)**, and then select **Continue**. -  +3. On the **Basics** tab, enter a name for the policy and select the vault you want this policy to be associated with. -1. In the **Basics** tab, provide a name for your backup policy and select **Azure Blobs** as the datasource type. You can also view the details for your selected vault. + You can view the details of the selected vault in this tab, and then select **continue**. + +4. On the **Schedule + retention** tab, enter the *backup details* of the data store, schedule, and retention for these data stores, as applicable. -  + 1. 
To use the backup policy for vaulted backups, operational backups, or both, select the corresponding checkboxes. + 1. For each data store you selected, add or edit the schedule and retention settings: + - **vaulted backups**: Choose the frequency of backups between *daily* and *weekly*, specify the schedule when the backup recovery points need to be created, and then edit the default retention rule (selecting **Edit**) or add new rules to specify the retention of recovery points using a *grandparent-parent-child* notation. + - **Operational backups**: These are continuous and don't require a schedule. Edit the default rule for operational backups to specify the required retention. - >[!NOTE] - >Although you'll see the **Backup storage redundancy** of the vault, the redundancy doesn't really apply to the operational backup of blobs since the backup is local in nature and no data is stored in the Backup vault. The Backup vault here is the management entity to help you manage the protection of block blobs in your storage accounts. +5. Go to **Review and create**. +6. Once the review is complete, select **Create**. -1. The **Backup policy** tab is where you specify the retention details. You'll see there's already a retention rule called **Default** with a retention period of 30 days. If you want to edit the retention duration, use the **edit retention rule** icon to edit and specify the duration for which you want the data to be retained. You can specify retention up to 360 days. +## Configure backups -  +You can configure backup for one or more storage accounts in an Azure region if you want them to back up to the same vault using a single backup policy. - >[!NOTE] - >Restoring over long durations may lead to restore operations taking longer to complete. Furthermore, the time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account with one million objects with 3,000 objects added per day and 1,000 objects deleted per day will require approximately two hours to restore to a point 30 days in the past. A retention period and restoration more than 90 days in the past would not be recommended for an account with this rate of change. +To configure backup for storage accounts, follow these steps: ++1. Go to **Backup center** > **Overview**, and then select **+ Backup**. ++2. On the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the **Datasource type**. ++3. On the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the **Datasource type**, and then select the *Backup vault* that you want to associate with your storage accounts. ++ You can view details of the selected vault on this tab, and then select **Next**. + +4. Select the *backup policy* that you want to use for retention. ++ You can view the details of the selected policy. You can also create a new backup policy, if needed. Once done, select **Next**. -1. In the **Review + create** pane, verify all details for the policy, and select **Create** once done to finish creating the policy. A notification will confirm once the Backup policy has been created and is ready to be used. +5. On the **Datasources** tab, select the *storage accounts* you want to back up. -  + You can select multiple storage accounts in the region to back up using the selected policy. Search or filter the storage accounts, if required. 
-## Configure backup + When you select the storage accounts, Azure Backup performs the following validations to ensure all prerequisites are met. The **Backup readiness** column shows if the Backup vault has enough permissions to configure backups for each storage account. -Backup of blobs is configured at the storage account level. So, all blobs in the storage account are protected with operational backup. + 1. Validates that the Backup vault has the required permissions to configure backup (the vault has the **Storage account backup contributor** role on all the selected storage accounts. If validation shows errors, then the selected storage accounts don't have **Storage account backup contributor** role. You can assign the required role, based on your current permissions. The error message helps you understand if you have the required permissions, and take the appropriate action: -You can configure backup for multiple storage accounts using the Backup Center. You can also configure backup for a storage account using the storage accountΓÇÖs **Data Protection** properties. This section discusses both the ways to configure backup. + - **Role assignment not done**: This indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault. ++ Select the roles, and then select **Assign missing roles** on the toolbar to automatically assign the required role to the Backup vault, and trigger an auto-revalidation. ++ The role propagation may take some time (up to 10 minutes) causing the revalidation to fail. In this scenario, you need to wait for a few minutes and select **Revalidate** to retry validation. ++ - **Insufficient permissions for role assignment**: This indicates that the vault doesn't have the required role to configure backups, and you (the user) don't have enough permissions to assign the required role. To make the role assignment easier, Azure Backup allows you to download the role assignment template, which you can share with users with permissions to assign roles for storage accounts. ++ To do this, select the storage accounts, and then select **Download role assignment template** to download the template. Once the role assignments are complete, select **Revalidate** to validate the permissions again, and then configure backup. ++ >[!Note] + >The template contains details for selected storage accounts only. So, if there are multiple users that need to assign roles for different storage accounts, you can select and download different templates accordingly. ++ 1. Validates that the number of containers to be backed up is less than *100*. By default, all containers are selected; however, you can exclude containers that shouldn't be backed up. If your storage account has *>100* containers, you must exclude containers to reduce the count to *100 or below*. ++ >[!Note] + >The storage accounts to be backed up must contain at least *1 container*. If the selected storage account doesn't contain any containers or if no containers are selected, you may get an error while configuring backups. ++7. Once validation succeeds, open the **Review and configure** tab. ++8. Review the details on the **Review + configure** tab and select **Next** to initiate the *configure backup* operation. ++You'll receive notifications about the status of configuring protection and its completion. ### Using Backup Center To start configuring backup: 1. Search for **Backup Center** in the search bar. -1. 
Navigate to **Overview** -> **+Backup**. +1. Go to **Overview** > **+Backup**.  To start configuring backup: 1. Select **Review + create** to create the backup policy. -1. Choose the required storage accounts for configuring protection of blobs. You can choose multiple storage accounts at once and choose Select.<br></br>However, ensure that the vault you have chosen has the required Azure role-based access control (Azure RBAC) role assigned to configure backup on storage accounts. Learn more about [Grant permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts).<br></br>If the role is not assigned, you can still assign the role while configuring backup. See step 7. +1. Choose the required storage accounts for configuring protection of blobs. You can choose multiple storage accounts at once and choose Select.<br></br>However, ensure that the vault you have chosen has the required Azure role-based access control (Azure RBAC) role assigned to configure backup on storage accounts. Learn more about [Grant permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts).<br></br>If the role isn't assigned, you can still assign the role while configuring backup. See step 7.  To start configuring backup:  - If validation displays errors (for two of the storage accounts listed in the figure above), you have not assigned the **Storage account backup contributor** role for these [storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts). Also, you can assign the required role here, based on your current permissions. The error message can help you understand if you have the required permissions, and take the appropriate action: + If validation displays errors (for two of the storage accounts), you haven't assigned the **Storage account backup contributor** role for these [storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts). Also, you can assign the required role here, based on your current permissions. The error message can help you understand if you have the required permissions, and take the appropriate action: - - **Role assignment not done:** This error (as shown for the item _blobbackupdemo3_ in the figure above) indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault. Select the roles, and click **Assign missing roles** on the toolbar. This will automatically assign the required role to the backup vault, and also trigger an auto-revalidation.<br><br>At times, role propagation may take a while (up to 10 minutes) causing the revalidation to fail. In such scenario, please wait for a few minutes and click the ΓÇÿRevalidateΓÇÖ button retry validation. + - **Role assignment not done:** This error (as shown for the item _blobbackupdemo3_ in the figure above) indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault. Select the roles, and select **Assign missing roles** on the toolbar. This will automatically assign the required role to the backup vault, and also trigger an auto-revalidation.<br><br>At times, role propagation may take a while (up to 10 minutes) causing the revalidation to fail. In such scenario, please wait for a few minutes and select the ΓÇÿRevalidateΓÇÖ button retry validation. 
- - **Insufficient permissions for role assignment:** This error (as shown for the item _blobbackupdemo4_ in the figure above) indicates that the vault doesnΓÇÖt have the required role to configure backup, and you (the user) donΓÇÖt have enough permissions to assign the required role. To make the role assignment easier, Backup allows you to download the role assignment template, which you can share with users with permissions to assign roles for storage accounts. To do this, select such storage accounts, and click **Download role assignment template** to download the template.<br><br>Once the roles are assigned, you can share it with the appropriate users. On successful assignment of the role, click **Revalidate** to validate permissions again, and then configure backup. + - **Insufficient permissions for role assignment:** This error (as shown for the item _blobbackupdemo4_ in the figure above) indicates that the vault doesnΓÇÖt have the required role to configure backup, and you (the user) donΓÇÖt have enough permissions to assign the required role. To make the role assignment easier, Backup allows you to download the role assignment template, which you can share with users with permissions to assign roles for storage accounts. To do this, select such storage accounts, and select **Download role assignment template** to download the template.<br><br>Once the roles are assigned, you can share it with the appropriate users. On successful assignment of the role, select **Revalidate** to validate permissions again, and then configure backup. >[!NOTE] >The template would only contain details for selected storage accounts. So, if there are multiple users that need to assign roles for different storage accounts, you can select and download different templates accordingly.-1. Once the validation is successful for all selected storage accounts, continue to **Review and configure** backup.<br><br>You'll receive notifications about the status of configuring protection and its completion. +1. Once the validation is successful for all selected storage accounts, continue to **Review and configure backup**. ++You'll receive notifications about the status of configuring protection and its completion. -### Using Data protection settings of the storage account +### Using Data protection settings of the storage account to configure operational backup You can configure backup for blobs in a storage account directly from the ΓÇÿData ProtectionΓÇÖ settings of the storage account. -1. Go to the storage account for which you want to configure backup for blobs, and then navigate to **Data Protection** in left pane (under **Data management**). +1. Go to the storage account for which you want to configure backup for blobs, and then go to **Data Protection** in left pane (under **Data management**). 1. In the available data protection options, the first one allows you to enable operational backup using Azure Backup.  -1. Select the check box corresponding to **Enable operational backup with Azure Backup**. Then select the Backup vault and the Backup policy you want to associate.<br><br>You can select the existing vault and policy, or create new ones, as required. +1. Select the checkbox corresponding to **Enable operational backup with Azure Backup**. Then select the Backup vault and the Backup policy you want to associate. + You can select the existing vault and policy, or create new ones, as required. >[!IMPORTANT] >You should have assigned the **Storage account backup contributor** role to the selected vault. 
Learn more about [Grant permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts). - - If you have already assigned the required role, click **Save** to finish configuring backup. Follow the portal notifications to track the progress of configuring backup. - - If you havenΓÇÖt assigned it yet, click **Manage identity** and Follow the steps below to assign the roles. + - If you've already assigned the required role, select **Save** to finish configuring backup. Follow the portal notifications to track the progress of configuring backup. + - If you havenΓÇÖt assigned it yet, select **Manage identity** and Follow the steps below to assign the roles.  - 1. On clicking **Manage identity**, brings you to the Identity blade of the storage account. + 1. On selecting **Manage identity**, brings you to the Identity pane of the storage account. - 1. Click **Add role assignment** to initiate the role assignment. + 1. Select **Add role assignment** to initiate the role assignment.  You can configure backup for blobs in a storage account directly from the ΓÇÿDat  - 1. Click **Save** to finish role assignment.<br><br>You will be notified through the portal once this completes successfully. You can also see the new role added to the list of existing ones for the selected vault. + 1. Select **Save** to finish role assignment. + + You'll receive notification through the portal once this completes successfully. You can also see the new role added to the list of existing ones for the selected vault.  - 1. Click the cancel icon (**x**) on the top right corner to return to the **Data protection** blade of the storage account.<br><br>Once back, continue configuring backup. + 1. Select the cancel icon (**x**) on the top right corner to return to the **Data protection** pane of the storage account.<br><br>Once back, continue configuring backup. ++## Effects on backed-up storage accounts -## Effects on backed up storage accounts +# [Vaulted backup](#tab/vaulted-backup) ++In storage accounts (for which you've configured vaulted backups), the object replication rules get created under the **Object replication** item in the left pane. ++# [Operational backup](#tab/operational-backup) Once backup is configured, changes taking place on block blobs in the storage accounts are tracked and data is retained according to the backup policy. You'll notice the following changes in the storage accounts for which backup is configured: Once backup is configured, changes taking place on block blobs in the storage ac  -## Manage operational backup +++## Manage backups -You can use Backup Center as your single pane of glass for managing all your backups. Regarding operational backup for Azure Blobs, you can use Backup Center to perform the following: +You can use Backup Center as your single pane of glass for managing all your backups. Regarding backup for Azure Blobs, you can use Backup Center to do the following: - As we've seen above, you can use it for creating Backup vaults and policies. You can also view all vaults and policies under the selected subscriptions. - Backup Center gives you an easy way to monitor the state of protection of protected storage accounts as well as storage accounts for which backup isn't currently configured. You can use Backup Center as your single pane of glass for managing all your bac For more information, see [Overview of Backup Center](backup-center-overview.md). 
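If you prefer the CLI to the portal for routine checks, the backup instances described above can also be listed with the `dataprotection` extension. A minimal sketch with placeholder resource group and vault names:

```azurecli
# The dataprotection command group ships as a CLI extension.
az extension add --name dataprotection

# List the backup instances (including blob backups) in a Backup vault.
az dataprotection backup-instance list \
  --resource-group backup-rg \
  --vault-name example-backup-vault \
  --output table
```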
-## Stopping protection +## Stop protection You can stop operational backup for your storage account according to your requirement. >[!NOTE]->Stopping protection only dissociates the storage account from the Backup vault (and the Backup tools, such as Backup Center), and doesnΓÇÖt disable blob point-in-time restore, versioning, and change feed that were configured. +>When you remove backups, the **OR policy** isn't removed from the source. So, you need to remove the policy separately. Stopping protection only dissociates the storage account from the Backup vault (and the backup tools, such as Backup center), and doesnΓÇÖt disable blob point-in-time restore, versioning, and change feed that were configured. To stop backup for a storage account, follow these steps: -1. Navigate to the backup instance for the storage account being backed up.<br><br>You can navigate to this from the storage account via **Storage account** -> **Data protection** -> **Manage backup settings**, or directly from the Backup Center via **Backup Center** -> **Backup instances** -> search for the storage account name. +1. Go to the backup instance for the storage account being backed up.<br><br>You can go to this from the storage account via **Storage account** -> **Data protection** -> **Manage backup settings**, or directly from the Backup Center via **Backup Center** -> **Backup instances** -> search for the storage account name.   -1. In the backup instance, click **Delete** to stop operational backup for the particular storage account. +1. In the backup instance, select **Delete** to stop operational backup for the particular storage account.  -After stopping backup, you may disable other storage data protection capabilities (that are enabled for configuring backup) from the data protection blade of the storage account. +After stopping backup, you may disable other storage data protection capabilities (that are enabled for configuring backup) from the data protection pane of the storage account. ## Next steps |
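The prerequisites described earlier in this article, registering the `Microsoft.DataProtection` provider and granting the vault the *Storage account backup contributor* role, can also be handled from the Azure CLI. This is a hedged sketch rather than the documented procedure; the subscription, resource group, vault, and storage account names in the resource IDs are placeholders.

```azurecli
# Register the resource provider used by Azure Backup for blobs (idempotent).
az provider register --namespace Microsoft.DataProtection

# Look up the Backup vault's managed identity, then grant it the role
# it needs on the storage account to be protected.
vault_principal_id=$(az resource show \
  --ids /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataProtection/backupVaults/<vault-name> \
  --query identity.principalId --output tsv)

az role assignment create \
  --assignee "$vault_principal_id" \
  --role "Storage Account Backup Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
```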
backup | Blob Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md | Title: Overview of operational backup for Azure Blobs -description: Learn about operational backup for Azure Blobs. + Title: Overview of Azure Blobs backup +description: Learn about Azure Blobs backup. Previously updated : 05/05/2021 Last updated : 02/15/2023+ -# Overview of operational backup for Azure Blobs +# Overview of Azure Blob backup -Operational backup for Blobs is a managed, local data protection solution that lets you protect your block blobs from various data loss scenarios like corruptions, blob deletions, and accidental storage account deletion. The data is stored locally within the source storage account itself and can be recovered to a selected point in time whenever needed. So it provides a simple, secure, and cost-effective means to protect your blobs. +Azure Backup provides a simple, secure, cost-effective, and cloud-based backup solution to protect your business or application-critical data stored in Azure Blob. -Operational backup for Blobs integrates with [Backup Center](backup-center-overview.md), among other Backup management capabilities, to provide a single pane of glass that can help you govern, monitor, operate, and analyze backups at scale. +This article gives you an understanding about configuring the following types of backups for your blobs: -## How operational backup works +- **Continuous backups**: You can configure operational backup, a managed local data protection solution, to protect your block blobs from accidental deletion or corruption. The data is stored locally within the source storage account and not transferred to the backup vault. You donΓÇÖt need to define any schedule for backups. All changes are retained, and you can restore them from the state at a selected point in time. -Operational backup of blobs is a **local backup** solution. So the backup data isn't transferred to the Backup vault, but is stored in the source storage account itself. However, the Backup vault still serves as the unit of managing backups. Also, this is a **continuous backup** solution, which means that you donΓÇÖt need to schedule any backups and all changes will be retained and restorable from the state at a selected point in time. +- **Periodic backups (preview)**: You can configure vaulted backup, a managed offsite data protection solution, to get protection against any accidental or malicious deletion of blobs or storage account. The backup data using vaulted backups is copied and stored in the Backup vault as per the schedule and frequency you define via the backup policy and retained as per the retention configured in the policy. ++You can choose to configure vaulted backups, operational backups, or both on your storage accounts using a single backup policy. The integration with [Backup center](backup-center-overview.md) enables you to govern, monitor, operate, and analyze backups at scale. ++## How the operational backup works? Operational backup uses blob platform capabilities to protect your data and allow recovery when required: Operational backup uses blob platform capabilities to protect your data and allo - **Delete lock**: Delete lock prevents the storage account from being deleted accidentally or by unauthorized users. Operational backup when configured also automatically applies a delete lock to reduce the possibilities of data loss because of storage account deletion. 
-Refer to the [support matrix](blob-backup-support-matrix.md) to learn about the limitations of the current solution. +For information about the limitations of the current solution, see the [support matrix](blob-backup-support-matrix.md). ++## How the vaulted backup works? ++Vaulted backup (preview) uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container. ++When you configure protection, Azure Backup allocates a destination storage account (Backup vault's storage account managed by Azure Backup) and enables object replication policy at container level on both destination and source storage account. When a backup job is triggered, the Azure Backup service creates a recovery point marker on the source storage account and polls the destination account for the recovery point marker replication. When the data transfer completes, the recovery point marker is replicated. Once the replication point marker is present on the destination, a recovery point is created. ++For information about the limitations of the current solution, see the [support matrix](blob-backup-support-matrix.md). ++## Protection -### Protection +### Protection using operational backup Operational backup is configured and managed at the **storage account** level, and applies to all block blobs within the storage account. Operational backup uses a **backup policy** to manage the duration for which the backup data (including older versions and deleted blobs) is to be retained, in that way defining the period up to which you can restore your data from. The backup policy can have a maximum retention of 360 days, or equivalent number of complete weeks (51) or months (11). When you configure backup for a storage account and assign a backup policy with a retention of ΓÇÿnΓÇÖ days, the underlying properties are set as described below. You can view these properties in the **Data protection** tab of the blob service in your storage account. -- **Point-in-time restore**: Set to ΓÇÿnΓÇÖ days, as defined in the backup policy. If the storage account already had point-in-time enabled with a retention of, say ΓÇÿxΓÇÖ days, before configuring backup, the point-in-time restore duration will be set to the greater of the two values, that is max(n,x). If you had already enabled point-in-time restore and specified the retention to be greater than that in the backup policy, it will remain unchanged.+- **Point-in-time restore**: Set to ΓÇÿnΓÇÖ days, as defined in the backup policy. If the storage account already had point-in-time enabled with a retention of, say ΓÇÿxΓÇÖ days, before configuring backup, the point-in-time restore duration will be set to the greater of the two values that is max(n,x). If you had already enabled point-in-time restore and specified the retention to be greater than that in the backup policy, it will remain unchanged. -- **Soft delete**: Set to ΓÇÿn+5ΓÇÖ days, that is, five days in addition to the duration specified in the backup policy. If the storage account that is being configured for operational backup already had soft delete enabled with a retention of, say ΓÇÿyΓÇÖ days, then the soft delete retention will be set to the maximum of the two values, that is, max(n+5,y). 
If you had already enabled soft delete and specified the retention to be greater than that according to the backup policy, it will remain unchanged.+- **Soft delete**: Set to ΓÇÿn+5ΓÇÖ days, that is, five days in addition to the duration specified in the backup policy. If the storage account that is being configured for operational backup already had soft delete enabled with a retention of, say ΓÇÿyΓÇÖ days, then the soft delete retention will be set to the maximum of the two values, that is, maximum (n+5, y). If you had already enabled soft delete and specified the retention to be greater than that according to the backup policy, it will remain unchanged. - **Versioning for blobs and blob change feed**: Versioning and change feed are enabled for storage accounts that have been configured for operational backup. To allow Backup to enable these properties on the storage accounts to be protect >[!NOTE] >Operational backup supports operations on block blobs only and operations on containers canΓÇÖt be restored. If you delete a container from the storage account by calling the **Delete Container** operation, that container canΓÇÖt be restored with a restore operation. ItΓÇÖs suggested you enable soft delete to enhance data protection and recovery. -### Management +### Protection using vaulted backup (in preview) ++Vaulted backup is configured at the storage account level. However, you can exclude containers that don't need backup. If your storage account has *>100* containers, you need to mandatorily exclude containers to reduce the count to *100* or below. For vaulted backups, the schedule and retention are managed via backup policy. You can set the frequency as *daily* or *weekly*, and specify when the backup recovery points need to be created. You can also configure different retention values for backups taken every day, week, month, or year. The retention rules are evaluated in a pre-determined order of priority. The *yearly* rule has the priority compared to *monthly* and *weekly* rule. Default retention settings are applied if other rules don't qualify. ++In storage accounts (for which vaulted backups are configured), the object replication rules get created under the *object replication* item on the *TOC* blade of the source storage account. ++You can enable operational backup and vaulted backup (or both) of blobs on a storage account that is independent of each other using the same backup policy. The vaulted blob backup solution allows you to retain your data for up to *10 years*. Restoring data from older recovery points may lead to longer time taken (longer RTO) during the restore operation. You can currently use the vaulted backup solution to perform restores to a different storage account only. For restoring to the same account, you may use operational backups. ++## Management Once you have enabled backup on a storage account, a Backup Instance is created corresponding to the storage account in the Backup vault. You can perform any Backup-related operations for a storage account like initiating restores, monitoring, stopping protection, and so on, through its corresponding Backup Instance. -Operational backup also integrates directly with Backup Center to help you manage the protection of all your storage accounts centrally, along with all other Backup supported workloads. 
Backup Center is your single pane of glass for all your Backup requirements like monitoring jobs and state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to backup and restore of data. +Both operational and vaulted backups integrate directly with Backup Center to help you manage the protection of all your storage accounts centrally, along with all other Backup supported workloads. Backup Center is your single pane of glass for all your Backup requirements like monitoring jobs and state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to back up and restore of data. -### Restore +## Restore You can restore data from any point in time for which a recovery point exists. A recovery point is created when a storage account is in protected state, and can be used to restore data as long as it falls in the retention period defined by the backup policy (and so the point-in-time restore capability of the blob service in the storage account). Operational backup uses blob point-in-time restore to restore data from a recovery point. Operational backup gives you the option to restore all block blobs in the storag ## Pricing -You won't incur any management charges or instance fee when using operational backup for blobs. However, you will incur the following charges: +### Operational backup ++You won't incur any management charges or instance fee when using operational backup for blobs. However, you'll incur the following charges: - Restores are done using blob point-in-time restore and attract charges based on the amount of data processed. For more information, see [point-in-time restore pricing](../storage/blobs/point-in-time-restore-overview.md#pricing-and-billing). - Retention of data because of [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](../storage/blobs/storage-blob-change-feed.md), and [Blob versioning](../storage/blobs/versioning-overview.md). +### Vaulted backup (preview) ++Vaulted backup currently doesn't incur any charges with preview release. + ## Next steps - [Configure and manage Azure Blobs backup](blob-backup-configure-manage.md) |
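To see the underlying blob service properties that operational backup sets (the point-in-time restore window, soft delete, versioning, and change feed described above), you can query them with the Azure CLI. A minimal sketch with placeholder account and resource group names; the property paths reflect the standard blob service properties resource:

```azurecli
# Inspect the data protection settings currently enabled on the account.
az storage account blob-service-properties show \
  --account-name examplestorage \
  --resource-group example-rg \
  --query "{restoreDays: restorePolicy.days, softDeleteDays: deleteRetentionPolicy.days, versioning: isVersioningEnabled, changeFeed: changeFeed.enabled}" \
  --output table
```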
backup | Blob Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md | Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 10/07/2021 Last updated : 02/20/2023 + # Support matrix for Azure Blobs backup -This article summarizes the regional availability, supported scenarios, and limitations of operational backup of blobs. +This article summarizes the regional availability, supported scenarios, and limitations of operational and vaulted backups of blobs. ## Supported regions +**Choose a backup type** ++# [Operational backup](#tab/operational-backup) + Operational backup for blobs is available in all public cloud regions, except France South and South Africa West. It's also available in sovereign cloud regions - all Azure Government regions and China regions (except China East). +# [Vaulted backup](#tab/vaulted-backup) ++Vaulted backup (preview) for blobs is currently available in the following regions: France Central, Canada Central, Canada East, US East, and US South. +++ ## Limitations +**Choose a backup type** ++# [Operational backup](#tab/operational-backup) + Operational backup of blobs uses blob point-in-time restore, blob versioning, soft delete for blobs, change feed for blobs and delete lock to provide a local backup solution. So limitations that apply to these capabilities also apply to operational backup. **Supported scenarios:** Operational backup supports block blobs in standard general-purpose v2 storage accounts only. Storage accounts with hierarchical namespace enabled (that is, ADLS Gen2 accounts) aren't supported. <br><br> Also, any page blobs, append blobs, and premium blobs in your storage account won't be restored and only block blobs will be restored. Operational backup of blobs uses blob point-in-time restore, blob versioning, so - A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation. - A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.-- If there're [immutable blobs](../storage/blobs/immutable-storage-overview.md#about-immutable-storage-for-blobs) among those being restored, such immutable blobs won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.+- If there are [immutable blobs](../storage/blobs/immutable-storage-overview.md#about-immutable-storage-for-blobs) among those being restored, such immutable blobs won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected. ++# [Vaulted backup](#tab/vaulted-backup) +The vaulted backup is currently in preview in the following regions: France Central, Canada Central, Canada East, US East, US South. 
++- You can back up only block blobs in a *standard general-purpose v2 storage account* using the vaulted backup solution for blobs. +- HNS-enabled storage accounts are currently not supported. This includes *ADLS Gen2 accounts* and accounts using the *NFS 3.0* or *SFTP* protocols for blobs. +- You can back up storage accounts with *up to 100 containers*. You can also select a subset of containers to back up (up to 100 containers). + - If your storage account contains more than 100 containers, you need to select *up to 100 containers* to back up. + - To back up any new containers that get created after backup configuration for the storage account, modify the protection of the storage account. These containers aren't backed up automatically. +- The storage accounts to be backed up must contain *a minimum of 1 container*. If the storage account doesn't contain any containers or if no containers are selected, an error may appear when you configure backup. +- Currently, you can perform only *one backup* per day (that includes scheduled and on-demand backups). Backup fails if you attempt to perform more than one backup operation a day. +- If you stop protection (vaulted backup) on a storage account, it doesn't delete the object replication policy created on the storage account. In these scenarios, you need to manually delete the *OR policies*. +- Cool and archived blobs are currently not supported. ++ ## Next steps -[Overview of operational backup for Azure Blobs](blob-backup-overview.md) +[Overview of Azure Blobs backup](blob-backup-overview.md) |
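One practical check before configuring vaulted backup is whether a storage account has hierarchical namespace (ADLS Gen2) enabled, since such accounts aren't supported. The Azure CLI sketch below shows one way to read that flag; the account and resource group names are placeholders.

```bash
# isHnsEnabled is true for ADLS Gen2 (hierarchical namespace) accounts, which vaulted backup doesn't support.
az storage account show \
  --name <storage-account-name> \
  --resource-group <resource-group-name> \
  --query "{name:name, kind:kind, sku:sku.name, isHnsEnabled:isHnsEnabled}"
```

If `isHnsEnabled` returns `true`, or the account isn't a standard general-purpose v2 account, it falls outside the supported scenarios listed for vaulted backup.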
backup | Blob Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md | Title: Restore Azure Blobs description: Learn how to restore Azure Blobs.- Previously updated : 03/11/2022+ Last updated : 02/20/2023 # Restore Azure Blobs -Block blobs in storage accounts with operational backup configured can be restored to any point in time within the retention range. Also, you can scope your restores to all block blobs in the storage account or to a subset of blobs. +This article describes how to use the Azure portal to perform restores for Azure Blob from operational or vaulted backups. With operational backups, you can restore all block blobs in storage accounts with operational backup configured or a subset of blob content to any point-in-time within the retention range. With vaulted backups, you can perform restores using a recovery point created, based on your backup schedule. ## Before you start +# [Operational backup](#tab/operational-backup) + - Blobs will be restored to the same storage account. So blobs that have undergone changes since the time to which you're restoring will be overwritten. - Only block blobs in a standard general-purpose v2 storage account can be restored as part of a restore operation. Append blobs, page blobs, and premium block blobs aren't restored. - When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation. - If a blob with an active lease is included in the range to restore, and if the current version of the leased blob is different from the previous version at the timestamp provided for PITR, the restore operation will fail atomically. We recommend breaking any active leases before initiating the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.-- If you delete a container from the storage account by calling the **Delete Container** operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers in addition to operational backup to protect against accidental deletion of containers.+- If you delete a container from the storage account by calling the **Delete Container** operation, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers in addition to operational backup to protect against accidental deletion of containers. - Refer to the [support matrix](blob-backup-support-matrix.md) for all limitations and supported scenarios. +# [Vaulted backup](#tab/vaulted-backup) ++- Vaulted backups only support restoring data to another storage account, which is different from the one that was backed up. +- Ensure that the Backup vault has the *Storage account backup contributor* role assigned to the target storage account to which the backup data needs to be restored. +++ ## Restore blobs -You can initiate a restore through the Backup Center. +To initiate a restore through the Backup center, follow these steps: -1. In Backup Center, go to **Restore** on the top bar. +1. In Backup center, go to **Restore** on the top bar.  -1. 
In the **Initiate Restore** tab, choose **Azure Blobs (Azure Storage)** as the Datasource type and select the **Backup Instance** you want to restore. The backup instance here is the storage account that contains the blobs you want to restore. +1. On the **Initiate Restore** tab, choose **Azure Blobs (Azure Storage)** as the Datasource type and select the **Backup Instance** you want to restore. The backup instance is the storage account that contains the blobs you want to restore.  -1. In the **Select recovery point** tab, choose the date and time you want to restore your data from. You can also use the slider to choose the point in time to restore from. The info bubble next to the date shows the valid duration from which you can restore your data. Operational backup for blobs being continuous backup gives you granular control over points to recover data from. +1. On the **Select recovery point** tab, select the type of backup you want to restore. - >[!NOTE] - > The time depicted here is your local time. + - For operational backup, choose the date and time you want to restore your data. You can also use the slider to choose the point-in-time to restore from. The restoration details appear next to the date, which shows the valid duration from which you can restore your data. Operational backup for blobs is a continuous backup and gives granular control over points to recover data from. -  + - For vaulted backup, choose a recovery point from which you want to perform the restore. + + >[!NOTE] + > The time mentioned here is your local time. -1. In the **Restore parameters** tab, choose whether you want to restore all blobs in the storage account, specific containers, or a subset of blobs using prefix match. When using prefix match, you can specify up to 10 ranges of prefixes or filepaths. More details on using prefix match [here](#use-prefix-match-for-restoring-blobs). +1. On the **Restore parameters** tab, select the options based on the type of backups you've chosen to perform restore.  - Choose one of these options: + For **operational backup**, choose one of these options: - **Restore all blobs in the storage account**: Using this option restores all block blobs in the storage account by rolling them back to the selected point in time. Storage accounts containing large amounts of data or witnessing a high churn may take longer times to restore. You can initiate a restore through the Backup Center. For more information on using prefixes to restore blob ranges, see [this section](#use-prefix-match-for-restoring-blobs). + For vaulted backup, choose one of these options: ++ - **Restore all backed-up containers**: Use this option to restore all backed-up containers in the storage account. + - **Browse and select containers to restore**: Use this option to browse and select up to **100** containers to restore. You must have sufficient permission to view the containers in the storage account, or you can't see the contents of the storage account. Select the target storage account (and its subscription), that is, the storage account where the data needs to be restored. ++ >[!Note] + >The vault must have the *Storage account backup contributor* role assigned on the target storage account. Select **Validate** to ensure that the required permissions to perform the restore are assigned. Once done, proceed to the next tab. + 1. Once you finish specifying what blobs to restore, continue to the **Review + restore** tab, and select **Restore** to initiate the restore. 1. 
**Track restore**: Use the **Backup Jobs** view to track the details and status of restores. To do this, navigate to **Backup Center** > **Backup Jobs**. The status will show **In progress** while the restore is being performed. The restore operation shown in the image performs the following actions: - It restores blobs in the lexicographical range *blob1* through *blob5* in *container2*. This range restores blobs with names such as *blob1*, *blob11*, *blob100*, *blob2*, and so on. Because the end of the range is exclusive, it restores blobs whose names begin with *blob4*, but doesn't restore blobs whose names begin with *blob5*. - It restores all blobs in *container3* and *container4*. Because the end of the range is exclusive, this range doesn't restore *container5*. +>[!Note] +>This capability is currently supported only for operational backups. + ## Next steps - [Overview of operational backup for Azure Blobs](blob-backup-overview.md) |
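For the vaulted-backup prerequisite that the Backup vault hold the *Storage account backup contributor* role on the target storage account, the role can be granted with a single role assignment. The sketch below is illustrative only: the principal ID, subscription, resource group, and account names are placeholders, and it assumes the vault uses its system-assigned managed identity.

```bash
# Grant the Backup vault's managed identity permission to restore into the target storage account.
# <vault-principal-id> is the object ID of the vault's system-assigned identity (visible on the vault's Identity blade).
az role assignment create \
  --assignee-object-id <vault-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Account Backup Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<target-storage-account>"
```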
backup | Delete Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md | Title: Script Sample - Delete a Recovery Services vault description: Learn about how to use a PowerShell script to delete a Recovery Services vault. Previously updated : 01/30/2022 Last updated : 03/06/2023 $backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM $backupItemsSQL = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID $backupItemsAFS = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID $backupItemsSAP = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID-$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"} +$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"} $protectableItemsSQL = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}-$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"} -$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID +$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"} +$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -VaultId $VaultToDelete.ID $backupServersMARS = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID $backupServersMABS = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" } $backupServersDPM = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" } foreach($item in $backupItemsSQL) } Write-Host "Disabled and deleted SQL Server backup items" -foreach($item in $protectableItems) +foreach($item in $protectableItemsSQL) { Disable-AzRecoveryServicesBackupAutoProtection -BackupManagementType AzureWorkload -WorkloadType MSSQL -InputItem $item -VaultId $VaultToDelete.ID #disable auto-protection for SQL } foreach($item in $pvtendpoints) { $penamesplit = $item.Name.Split(".") $pename = $penamesplit[0]- Remove-AzPrivateEndpointConnection -ResourceId $item.PrivateEndpoint.Id -Force #remove private endpoint connections + Remove-AzPrivateEndpointConnection -ResourceId $item.Id -Force #remove private endpoint connections Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints } Write-Host "Removed Private Endpoints" if ($null -ne $fabricObjects) { #Recheck presence of backup items in vault $backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID $backupItemsSQLFin = Get-AzRecoveryServicesBackupItem 
-BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID-$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"} +$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"} $protectableItemsSQLFin = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true} $backupItemsSAPFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID-$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"} +$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"} $backupItemsAFSFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID-$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID +$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -VaultId $VaultToDelete.ID $backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID $backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" } $backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" } |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 10/14/2022 Last updated : 02/20/2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary +- February 2023 + - [Azure Blob vaulted backups (preview)](#azure-blob-vaulted-backups-preview) - October 2022 - [Multi-user authorization using Resource Guard for Backup vault (in preview)](#multi-user-authorization-using-resource-guard-for-backup-vault-in-preview) - [Enhanced soft delete for Azure Backup (preview)](#enhanced-soft-delete-for-azure-backup-preview) You can learn more about the new releases by bookmarking this page or by [subscr - February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +## Azure Blob vaulted backups (preview) ++Azure Backup now enables you to perform a vaulted backup of block blob data in *general-purpose v2 storage accounts* to protect data against ransomware attacks or source data loss due to malicious or rogue admin. You can define the backup schedule to create recovery points and the retention settings that determine how long backups will be retained in the vault. You can configure and manage the vaulted and operational backups using a single backup policy. ++Under vaulted backups, the data is copied and stored in the Backup vault. So, you get an offsite copy of data that can be retained for up to *10 years*. If any data loss happens on the source account, you can trigger a restore to an alternate account and get access to your data. The vaulted backups can be managed at scale via the Backup center, and monitored via the rich alerting and reporting capabilities offered by the Azure Backup service. ++If you're currently using operational backups, we recommend you to switch to vaulted backups for complete protection against different data loss scenarios. ++For more information, see [Azure Blob backup overview](blob-backup-overview.md). +++ ## Multi-user authorization using Resource Guard for Backup vault (in preview) Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization. |
bastion | Connect Native Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md | Use the example that corresponds to the type of target VM to which you want to c az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" ``` +> [!IMPORTANT] +> Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM. + **SSH:** The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. |
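For the SSH sign-in mentioned at the end of the Bastion section, a key-pair connection generally takes the same shape as the RDP command. The following is a sketch with placeholder values; it assumes the Bastion and SSH CLI extensions are installed and that the target VM accepts the given username and key.

```bash
# SSH to a target VM through Bastion using an SSH key pair (all angle-bracket values are placeholders).
az network bastion ssh \
  --name "<BastionName>" \
  --resource-group "<ResourceGroupName>" \
  --target-resource-id "<VMResourceId>" \
  --auth-type "ssh-key" \
  --username "<Username>" \
  --ssh-key "<PathToPrivateKey>"
```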
chaos-studio | Chaos Studio Private Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md | VNet is the fundamental building block for your private network in Azure. VNet e VNet injection allows a Chaos resource provider to inject containerized workloads into your VNet so that resources without public endpoints can be accessed via a private IP address on the VNet. To configure VNet injection: -1. Register the `Microsoft.ContainerInstance` resource provider with your subscription (if applicable). +1. Register the `Microsoft.ContainerInstance` and `Microsoft.Relay` resource providers with your subscription (if applicable). ```bash az provider register --namespace 'Microsoft.ContainerInstance' --wait |
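The same pattern applies to the second provider named in that step, `Microsoft.Relay`, which the snippet doesn't show:

```bash
az provider register --namespace 'Microsoft.Relay' --wait
```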
cloud-shell | Embed Cloud Shell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md | To integrate Cloud Shell's launch button into markdown files by copying the foll Regular sized button ```markdown-[ +[](https://shell.azure.com) ``` Large sized button ```markdown-[ +[](https://shell.azure.com) ``` The location of these image files is subject to change. We recommend that you download the files for |
cognitive-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md | The Speech SDK for Java doesn't support Windows on ARM64. Embedded speech is only available with C#, C++, and Java SDKs. The other Speech SDKs, Speech CLI, and REST APIs don't support embedded speech. -Embedded speech recognition only supports mono 16 bit, 16-kHz PCM-encoded WAV audio. +Embedded speech recognition only supports mono 16 bit, 8-kHz or 16-kHz PCM-encoded WAV audio. Embedded neural voices only support 24-kHz sample rate. embeddedSpeechConfig.SetSpeechSynthesisVoice( embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm); ``` -You can find ready to use embedded speech samples at [GitHub](https://aka.ms/csspeech/samples). +You can find ready to use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects from scratch, see samples specific documentation: -- [C# (.NET 6.0)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/dotnetcore/embedded-speech)-- [C# for Unity](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/unity/embedded-speech)+- [C# (.NET 6.0)](https://aka.ms/embedded-speech-samples-csharp) +- [C# for Unity](https://aka.ms/embedded-speech-samples-csharp-unity) ::: zone-end ::: zone pivot="programming-language-cpp" embeddedSpeechConfig->SetSpeechSynthesisVoice( embeddedSpeechConfig->SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat::Riff24Khz16BitMonoPcm); ``` -You can find ready to use embedded speech samples at [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/embedded-speech) +You can find ready to use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects from scratch, see samples specific documentation: +- [C++](https://aka.ms/embedded-speech-samples-cpp) ::: zone-end ::: zone pivot="programming-language-java" embeddedSpeechConfig.setSpeechSynthesisVoice( embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm); ``` -You can find ready to use embedded speech samples at [GitHub](https://aka.ms/csspeech/samples). -- [Java (JRE)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/jre/embedded-speech)-- [Java for Android](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/android/embedded-speech)+You can find ready to use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects from scratch, see samples specific documentation: +- [Java (JRE)](https://aka.ms/embedded-speech-samples-java) +- [Java for Android](https://aka.ms/embedded-speech-samples-java-android) ::: zone-end |
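Because embedded speech recognition accepts only mono, 16-bit, 8-kHz or 16-kHz PCM WAV input, source audio often needs a conversion step first. The following is a sketch using ffmpeg, which isn't part of the Speech SDK; the file names are placeholders and ffmpeg is assumed to be installed.

```bash
# Convert arbitrary input audio to mono, 16-kHz, 16-bit PCM WAV for embedded speech recognition.
ffmpeg -i input.m4a -ac 1 -ar 16000 -c:a pcm_s16le output.wav
```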
cognitive-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md | The table below outlines the various ways content filtering can appear: ||-| | 200 | In the cases when all generation passes the filter models no content moderation details are added to the response. The finish_reason for each generation will be either stop or length. | -**Example response:** +**Example request payload:** ```json { |
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Each model family has a series of models that are further distinguished by capab Azure OpenAI model names typically correspond to the following standard naming convention: -`{family}-{capability}[-{input-type}]-{identifier}` +`{capability}-{family}[-{input-type}]-{identifier}` | Element | Description | | | |-| `{family}` | The model family of the model. For example, [GPT-3 models](#gpt-3-models) uses `text`, while [Codex models](#codex-models) use `code`.| -| `{capability}` | The relative capability of the model. For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.| +| `{capability}` | The capability of the model. For example, [GPT-3 models](#gpt-3-models) use `text`, while [Codex models](#codex-models) use `code`.| +| `{family}` | The model family. For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.| | `{input-type}` | ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models support `doc` and `query`.| | `{identifier}` | The version identifier of the model. | |
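To make the corrected naming convention concrete: a name such as `text-davinci-003` parses as capability `text`, family `davinci`, and identifier `003`, while an embeddings name such as `text-search-ada-doc-001` adds the input type `doc` between the family and the identifier. These names are only illustrative of the pattern; the models available in a given region or deployment may differ.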
cosmos-db | Configure Periodic Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md | If you have accidentally deleted or corrupted your data, you should contact [Azu > * VNET access control lists > * Stored procedures, triggers and user-defined functions > * Multi-region settings +> * Managed identity settings + If you provision throughput at the database level, the backup and restore process in this case happen at the entire database level, and not at the individual containers level. In such cases, you can't select a subset of containers to restore. |
cosmos-db | Configure Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md | The first step to use Synapse Link is to enable it for your Azure Cosmos DB data > [!NOTE] > If you want to use Full Fidelity Schema for API for NoSQL accounts, you can't use the Azure portal to enable Synapse Link. This option can't be changed after Synapse Link is enabled in your account and to set it you must use Azure CLI or PowerShell. For more information, check [analytical store schema representation documentation](analytical-store-introduction.md#schema-representation). +> [!NOTE] +> You need [Contributor role](role-based-access-control.md) to enable Synapse Link at account level. And you need at least [Operator role](role-based-access-control.md) to enable Synapse Link in your containers or collections. + ### Azure portal 1. Sign into the [Azure portal](https://portal.azure.com/). |
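Where the portal can't be used, enabling Synapse Link at the account level is a single CLI call. The sketch below uses placeholder names, and the `--enable-analytical-storage` flag assumes a reasonably recent Azure CLI version.

```bash
# Enable Synapse Link (analytical storage) on an existing Azure Cosmos DB account.
az cosmosdb update \
  --name <cosmos-account-name> \
  --resource-group <resource-group-name> \
  --enable-analytical-storage true
```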
cosmos-db | How To Setup Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md | Not available ## Restore a continuous account that is configured with managed identity -System identity is tied to one specific account and can't be reused in another account. So, a new user-assigned identity is required during the restore process. +A user-assigned identity is required in the restore request because the source account managed identity (User-assigned and System-assigned identities) cannot be carried over automatically to the target database account. ### [Azure CLI](#tab/azure-cli) Use the Azure CLI to restore a continuous account that is already configured usi > [!NOTE] > This feature is currently under Public Preview and requires Cosmos DB CLI Extension version 0.20.0 or higher. -The newly created user assigned identity is only needed during the restore and can be cleaned up once the restore has completed. First, to restore a source account with system-assigned identity. 1. Create a new user-assigned identity (or use an existing one) for the restore process. The newly created user assigned identity is only needed during the restore and c 1. Once the restore has completed, the target (restored) account will have the user-assigned identity. If desired, user can update the account to use System-Assigned managed identity. -By default, when you trigger a restore for an account with user-assigned managed identity, the user-assigned identity will be passed to the target account automatically. -If desired, the user can also trigger a restore using a different user-assigned identity than the source account by specifying it in the restore parameters. ### [PowerShell / Azure Resource Manager template / Azure portal](#tab/azure-powershell+arm-template+azure-portal) |
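For the first restore step, creating the user-assigned identity is a one-line operation; the identity and resource group names below are placeholders.

```bash
# Create a user-assigned managed identity to pass in the restore request for a CMK-enabled account.
az identity create \
  --name <restore-identity-name> \
  --resource-group <resource-group-name>
```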
cosmos-db | Troubleshoot Request Rate Too Large | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md | There are different error messages that correspond to different types of 429 exc ## Request rate is large This is the most common scenario. It occurs when the request units consumed by operations on data exceed the provisioned number of RU/s. If you're using manual throughput, this occurs when you've consumed more RU/s than the manual throughput provisioned. If you're using autoscale, this occurs when you've consumed more than the maximum RU/s provisioned. For example, if you have a resource provisioned with manual throughput of 400 RU/s, you'll see 429 when you consume more than 400 request units in a single second. If you have a resource provisioned with autoscale max RU/s of 4000 RU/s (scales between 400 RU/s - 4000 RU/s), you'll see 429 responses when you consume more than 4000 request units in a single second. +> [!TIP] +> All operations are charged based on the number of resources they consume. These charges are measured in request units. These charges include requests that do not complete successfully due to application errors such as `400`, `412`, `449`, etc. While looking at throttling or usage, it is a good idea to investigate if some pattern has changed in your usage which would result in an increase of these operations. Specifically, check for tags `412` or `449` (actual conflict). +> +> For more information about provisioned throughput, see [provisioned throughput in Azure Cosmos DB](../set-throughput.md). + ### Step 1: Check the metrics to determine the percentage of requests with 429 error Seeing 429 error messages doesn't necessarily mean there's a problem with your database or container. A small percentage of 429 responses is normal whether you're using manual or autoscale throughput, and is a sign that you're maximizing the RU/s you've provisioned. |
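As a starting point for checking how many requests return 429 (and related status codes such as 412 or 449), the account's *TotalRequests* metric can be split by status code. The sketch below uses a placeholder resource ID, and the metric and dimension names are the ones commonly exposed for Azure Cosmos DB accounts; adjust if your account reports them differently.

```bash
# Count requests per minute that were rate limited (HTTP 429) over the default time window.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
  --metric TotalRequests \
  --aggregation Count \
  --interval PT1M \
  --filter "StatusCode eq '429'"
```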
data-factory | Data Movement Security Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md | In this article, we review security considerations in the following two data mov - **Store encrypted credentials in an Azure Data Factory managed store**. Data Factory helps protect your data store credentials by encrypting them with certificates managed by Microsoft. These certificates are rotated every two years (which includes certificate renewal and the migration of credentials). For more information about Azure Storage security, see [Azure Storage security overview](../storage/blobs/security-recommendations.md). - **Store credentials in Azure Key Vault**. You can also store the data store's credential in [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). Data Factory retrieves the credential during the execution of an activity. For more information, see [Store credential in Azure Key Vault](store-credentials-in-key-vault.md).-- + Centralizing storage of application secrets in Azure Key Vault allows you to control their distribution. Key Vault greatly reduces the chances that secrets may be accidentally leaked. Instead of storing the connection string in the app's code, you can store it securely in Key Vault. Your applications can securely access the information they need by using URIs. These URIs allow the applications to retrieve specific versions of a secret. There's no need to write custom code to protect any of the secret information stored in Key Vault. ### Data encryption in transit |
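Putting a connection string into Key Vault, so that a Data Factory linked service can reference it by secret name instead of embedding the credential, is a single call. The vault, secret, and connection-string values below are placeholders.

```bash
# Store a data store connection string as a Key Vault secret for Data Factory to reference.
az keyvault secret set \
  --vault-name <key-vault-name> \
  --name <secret-name> \
  --value "<connection-string>"
```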
data-factory | Solution Template Replicate Multiple Objects Sap Cdc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-replicate-multiple-objects-sap-cdc.md | This article describes a solution template that you can use to replicate multipl ## About this solution template -This template reads an external control file in csv format on your storage store, which contains your SAP ODP contexts, SAP ODP objects and key columns from SAP source system as well as your containers, folders and partitions from Azure Data Lake Gen2 destination store. It then copies each of the SAP ODP object from SAP system to Azure Data Lake Gen2 in Delta format. +This template reads an external control file in json format on your storage store, which contains your SAP ODP contexts, SAP ODP objects and key columns from SAP source system as well as your containers, folders and partitions from Azure Data Lake Gen2 destination store. It then copies each of the SAP ODP object from SAP system to Azure Data Lake Gen2 in Delta format. The template contains three activities: - **Lookup** retrieves the SAP ODP objects list to be loaded and the destination store path from an external control file on your Azure Data Lake Gen2 store. - **ForEach** gets the SAP ODP objects list from the Lookup activity and iterates each object to the mapping dataflow activity. - **Mapping dataflow** replicates each SAP ODP object from SAP system to Azure Data Lake Gen2 in Delta format. It will do initial full load in the first run and then do incremental load in the subsequent runs automatically. It will merge the changes to Azure Data Lake Gen2 in Delta format. -An external control file in csv format is required for in this template. The schema for the control file is as below. -- *context* is your SAP ODP context from the source SAP system. You can get more details [here](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset).-- *object* is your SAP ODP object name to be loaded from the SAP system. You can get more details [here](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset).-- *keys* are your key column names from SAP ODP objects used to do the dedupe in mapping dataflow.-- *container* is your container name in the Azure Data Lake Gen2 as the destination store.-- *folder* is your folder name in the Azure Data Lake Gen2 as the destination store. -- *partition* is your column name used to create partitions for each unique value in such column to write data into Delta format on Azure Data Lake Gen2 via Spark cluster used by mapping dataflow. You can get more details [here](concepts-data-flow-performance.md#key)- - :::image type="content" source="media/solution-template-replicate-multiple-objects-sap-cdc/sap-cdc-template-control-file.png" alt-text="Screenshot of SAP CDC control file."::: - +An external control file in json format is required in this template. The schema for the control file is as below. +- *checkPointKey* is your custom key to manage the checkpoint of your changed data capture in ADF. You can get more details [here](concepts-change-data-capture.md#checkpoint). +- *sapContext* is your SAP ODP context from the source SAP system. You can get more details [here](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset). +- *sapObjectName* is your SAP ODP object name to be loaded from the SAP system. 
You can get more details [here](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset). +- *sapRunMode* is to determine how you want to load SAP object. It can be fullLoad, incrementalLoad or fullAndIncrementalLoad. +- *sapKeyColumns* are your key column names from SAP ODP objects used to do the dedupe in mapping dataflow. +- *sapPartitions* are list of partition conditions leading to separate extraction processes in the connected SAP system. +- *deltaContainer* is your container name in the Azure Data Lake Gen2 as the destination store. +- *deltaFolder* is your folder name in the Azure Data Lake Gen2 as the destination store. +- *deltaKeyColumns* are your columns used to determine if a row from the source matches a row from the sink when you want to update or delete a row. +- *deltaPartition* is your column used to create partitions for each unique value in such column to write data into Delta format on Azure Data Lake Gen2 via Spark cluster used by mapping dataflow. You can get more details [here](concepts-data-flow-performance.md#key) ++A sample control file is as below: +```json +[ + { + "checkPointKey":"cba2acf0-d5e2-4d84-a552-e0a059b6d320", + "sapContext": "ABAP_CDS", + "sapObjectName": "ZPERFCDPOS$F", + "sapRunMode": "fullAndIncrementalLoad", + "sapKeyColumns": [ + "TABKEY" + ], + "sapPartitions": [ + [{ + "fieldName": "TEXTCASE", + "sign": "I", + "option": "EQ", + "low": "1" + }, + { + "fieldName": "TEXTCASE", + "sign": "I", + "option": "EQ", + "low": "X" + }] + ], + "deltaContainer":"delta", + "deltaFolder":"ZPERFCDPOS", + "deltaKeyColumns":["TABKEY"], + "deltaPartition":"TEXTCASE", + "stagingStorageFolder":"stagingcontainer/stagingfolder" + }, + { + "checkPointKey":"fgaeca7f-d3d4-406f-bb48-a17faa83f76c", + "sapContext": "SAPI", + "sapObjectName": "Z0131", + "sapRunMode": "incrementalLoad", + "sapKeyColumns": [ + "ID" + ], + "sapPartitions": [], + "deltaContainer":"delta", + "deltaFolder":"Z0131", + "deltaKeyColumns":["ID"], + "deltaPartition":"COMPANY", + "stagingStorageFolder":"stagingcontainer/stagingfolder" + } +] +``` ## How to use this solution template -1. Create and upload a control file into CSV format to your Azure Data Lake Gen2 as the destination store. The default container to store the control file is **demo** and default control file name is **SAP2DeltaLookup.csv**. +1. Create and upload a control file into json format to your Azure Data Lake Gen2 as the destination store. The default container to store the control file is **demo** and default control file name is **SapToDeltaParameters.json**. - :::image type="content" source="media/solution-template-replicate-multiple-objects-sap-cdc/sap-cdc-template-control-file.png" alt-text="Screenshot of SAP CDC control file."::: 2. Go to the **Replicate multiple tables from SAP ODP to Azure Data Lake Storage Gen2 in Delta format** template and **click** it. |
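For step 1, uploading the control file to the default **demo** container can be done from the command line as well as the portal. The sketch below assumes the destination is reachable with your signed-in identity; the storage account name is a placeholder.

```bash
# Upload the JSON control file to the default container and file name used by the template.
az storage blob upload \
  --account-name <adls-gen2-account-name> \
  --container-name demo \
  --name SapToDeltaParameters.json \
  --file ./SapToDeltaParameters.json \
  --auth-mode login
```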
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | New top-level CDC resource - native CDC configuration in 3 simple steps [Learn m ### Orchestration -Orchestyrate Synapse notebooks and Synapse spark job definitions natively [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/orchestrate-and-operationalize-synapse-notebooks-and-spark-job/ba-p/3724379) +Orchestrate Synapse notebooks and Synapse spark job definitions natively [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/orchestrate-and-operationalize-synapse-notebooks-and-spark-job/ba-p/3724379) ### Region expansion |
defender-for-cloud | Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md | Title: User roles and permissions in Microsoft Defender for Cloud description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. Previously updated : 01/24/2023 Last updated : 03/06/2023 # User roles and permissions -Microsoft Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md). You can assign these roles to users, groups, and services in Azure to give users access to resources according the access defined in the role. +Microsoft Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md). You can assign these roles to users, groups, and services in Azure to give users access to resources according to the access defined in the role. -Defender for Cloud assesses the configuration of your resources to identify security issues and vulnerabilities. In Defender for Cloud, you only see information related to a resource when you're assigned one of these roles for the subscription or for the resource group the resource is in: Owner, Contributor, or Reader +Defender for Cloud assesses the configuration of your resources to identify security issues and vulnerabilities. In Defender for Cloud, you only see information related to a resource when you're assigned one of these roles for the subscription or for the resource group the resource is in: Owner, Contributor, or Reader. In addition to the built-in roles, there are two roles specific to Defender for Cloud: The specific role required to deploy monitoring components depends on the extens ## Roles used to automatically provision agents and extensions -To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](../governance/policy/how-to/remediate-resources.md). To use remediation, Defender for Cloud needs to create service principals, also called managed identities, that assigns roles at the subscription level. For example, the service principals for the Defender for Containers plan are: +To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](../governance/policy/how-to/remediate-resources.md). To use remediation, Defender for Cloud needs to create service principals, also called managed identities that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are: | Service Principal | Roles | |:-|:-| |
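Granting one of these roles is a standard Azure RBAC assignment. For example, giving a user Security Admin at subscription scope can be sketched as follows, with the user and subscription values as placeholders.

```bash
# Assign the Security Admin role to a user for an entire subscription.
az role assignment create \
  --assignee <user-object-id-or-upn> \
  --role "Security Admin" \
  --scope "/subscriptions/<subscription-id>"
```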
defender-for-iot | How To Deploy Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md | This article describes how to create and deploy SSL/TLS certificates on OT netwo - Between an on-premises management console and a high availability (HA) server, if configured - Between OT sensors or on-premises management consoles and partners servers defined in [alert forwarding rules](how-to-forward-alert-information-to-partners.md) -Some organizations also validate their certificates against a Certificate Revocation List (CRL) and the certificate expiration date, and the certificate trust chain. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components. +You can deploy SSL/TLS certificates during initial configuration as well as later on. ++Defender for IoT validates certificates against the certificate expiration date and against a passphrase, if one is defined. Validations against a Certificate Revocation List (CRL) and the certificate trust chain are available as well, though not mandatory. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components. Each certificate authority (CA)-signed certificate must have both a `.key` file and a `.crt` file, which are uploaded to OT network sensors and on-premises management consoles after the first sign-in. While some organizations may also require a `.pem` file, a `.pem` file isn't required for Defender for IoT. Deploy your SSL/TLS certificate by importing it to your OT sensor or on-premises Verify that your SSL/TLS certificate [meets the required parameters](#verify-certificate-file-parameter-requirements), and that you have [access to a CRL server](#verify-crl-server-access). -### Import the SSL/TLS certificate +### Deploy a certificate on an OT sensor ++1. Sign into your OT sensor and select **System settings** > **Basic** > **SSL/TLS certificate**. ++1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab: ++ - **Import a trusted CA certificate (recommended)** + - **Use Locally generated self-signed certificate (Not recommended)** -**To deploy a certificate on an OT sensor**: + # [Trusted CA certificates](#tab/import-trusted-ca-certificate) + + 1. Enter the following parameters: + + | Parameter | Description | + ||| + | **Certificate Name** | Enter your certificate name. | + | **Passphrase** - *Optional* | Enter a passphrase. | + | **Private Key (KEY file)** | Upload a Private Key (KEY file). | + | **Certificate (CRT file)** | Upload a Certificate (CRT file). | + | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). | + + Select **Use CRL (Certificate Revocation List) to check certificate status** to validate the certificate against a [CRL server](#verify-crl-server-access). The certificate is checked once during the import process. -1. Sign into your OT sensor and select **System settings** > **Basic** > **SSL/TLS certificate** + For example: -1. In the **SSL/TLS certificate** pane, enter your certificate name and passphrase, and then upload the files you'd created earlier. + :::image type="content" source="media/how-to-deploy-certificates/recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." 
lightbox="media/how-to-deploy-certificates/recommended-ssl.png"::: + + # [Locally generated self-signed certificates](#tab/locally-generated-self-signed-certificate) + + > [!NOTE] + > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment. + > We recommend using self-signed certificates in test environments only. + > The owner of the certificate cannot be validated and the security of your system cannot be maintained. - Select **Enable certificate validation** to validate the certificate against a [CRL server](#verify-crl-server-access). + Select **Confirm** to acknowledge the warning. ++ ++1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**. 1. Select **Save** to save your certificate settings. -**To deploy a certificate on an on-premises management console sensor**: +### Deploy a certificate on an on-premises management console -1. Sign into your OT sensor and select **System settings** > **SSL/TLS certificates**. +1. Sign into your on-premises management console and select **System settings** > **SSL/TLS certificates**. -1. In the **SSL/TLS Certificates** dialog, select **Add Certificate**. +1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab: -1. In the **Import a trusted CA-signed certificate** area, enter a certificate name and optional passphrase, and then upload the files you'd created earlier. + - **Import a trusted CA certificate** + - **Use Locally generated self-signed certificate (Insecure, not recommended)** -1. Select the **Enable certificate validation** option to validate the certificate against a [CRL server](#verify-crl-server-access). + # [Trusted CA certificates](#tab/cm-import-trusted-ca-certificate) + + 1. In the **SSL/TLS Certificates** dialog, select **Add Certificate**. -1. Select **Save** to save your certificate settings. + 1. Enter the following parameters: + + | Parameter | Description | + ||| + | **Certificate Name** | Enter your certificate name. | + | **Passphrase** - *Optional* | Enter a passphrase. | + | **Private Key (KEY file)** | Upload a Private Key (KEY file). | + | **Certificate (CRT file)** | Upload a Certificate (CRT file). | + | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). | ++ For example: ++ :::image type="content" source="media/how-to-deploy-certificates/management-ssl-certificate.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/management-ssl-certificate.png"::: ++ # [Locally generated self-signed certificates](#tab/cm-locally-generated-self-signed-certificate) + + > [!NOTE] + > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment. + > We recommend using self-signed certificates in test environments only. + > The owner of the certificate cannot be validated and the security of your system cannot be maintained. ++ Select **I CONFIRM** to acknowledge the warning. ++ ++1. Select the **Enable Certificate Validation** option to turn on system-wide validation for SSL/TLS certificates with the issuing [Certificate Authority](#create-ca-signed-ssltls-certificates) and [Certificate Revocation Lists](#verify-crl-server-access). ++1. Select **SAVE** to save your certificate settings. 
You can also [import the certificate to your OT sensor using CLI commands](references-work-with-defender-for-iot-cli-commands.md#tlsssl-certificate-commands). |
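Before uploading certificate files to a sensor or on-premises management console, it can help to confirm locally that the `.crt` and `.key` files are well formed, unexpired, and belong to each other. The openssl commands below are a generic sketch (file names are placeholders); the modulus comparison applies to RSA keys.

```bash
# Show the certificate's subject, issuer, validity dates, and extensions.
openssl x509 -in certificate.crt -noout -text

# The two digests should match if the private key corresponds to the certificate (RSA keys).
openssl x509 -in certificate.crt -noout -modulus | openssl md5
openssl rsa -in certificate.key -noout -modulus | openssl md5
```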
defender-for-iot | Iot Advanced Threat Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md | After youΓÇÖve [configured your Defender for IoT data to trigger new incidents i 1. Above the incident grid, select the **Product name** filter and clear the **Select all** option. Then, select **Microsoft Defender for IoT** to view only incidents triggered by Defender for IoT alerts. For example: - :::image type="content" source="media/iot-solution/filter-incidents-defender-for-iot.png" alt-text="Screenshot of filtering incidents by product name for Defender for IoT devices."::: + :::image type="content" source="media/iot-solution/filter-incidents-defender-for-iot.png" alt-text="Screenshot of filtering incidents by product name for Defender for IoT devices." lightbox="media/iot-solution/filter-incidents-defender-for-iot.png"::: 1. Select a specific incident to begin your investigation. - In the incident details pane on the right, view details such as incident severity, a summary of the entities involved, any mapped MITRE ATT&CK tactics or techniques, and more. + In the incident details pane on the right, view details such as incident severity, a summary of the entities involved, any mapped MITRE ATT&CK tactics or techniques, and more. For example: - :::image type="content" source="media/iot-solution/investigate-iot-incidents.png" alt-text="Screenshot of a Microsoft Defender for IoT incident in Microsoft Sentinel."::: + :::image type="content" source="media/iot-solution/investigate-iot-incidents.png" alt-text="Screenshot of a Microsoft Defender for IoT incident in Microsoft Sentinel."lightbox="media/iot-solution/investigate-iot-incidents.png"::: - > [!TIP] - > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane. +1. Select **View full details** to open the incident details page, where you can drill down even more. For example: -For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md). + - Understand the incident's business impact and physical location using details, like an IoT device's site, zone, sensor name, and device importance. -### Investigate further with IoT device entities --When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its [device entity page]](/azure/sentinel/entity-pages). + - Learn about recommended remediation steps by selecting an alert in the incident timeline and viewing the **Remediation steps** area. -You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false"::: + - Select an IoT device entity from the **Entities** list to open its [device entity page](/azure/sentinel/entity-pages). For more information, see [Investigate further with IoT device entities](#investigate-further-with-iot-device-entities). -If you don't see your IoT device entity right away, select **View full details** under the entities listed to open the full incident page. In the **Entities** tab, select an IoT device to open its entity page. For example: +For more information, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md). 
- :::image type="content" source="media/iot-solution/incident-full-details-iot-device.png" alt-text="Screenshot of a full detail incident page."::: +> [!TIP] +> To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane on the **Incidents** page. -The IoT device entity page provides contextual device information, with basic device details and device owner contact information. The device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example: +### Investigate further with IoT device entities +When you are investigating an incident in Microsoft Sentinel and have the incident details pane open on the right, select an IoT device entity from the **Entities** list to view more details about the selected entity. Identify an *IoT device* by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false"::: -For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](../../sentinel/entity-pages.md). +If you don't see your IoT device entity right away, select **View full details** to open the full incident page, and then check the **Entities** tab. Select an IoT device entity to view more entity data, like basic device details, owner contact information, and a timeline of events that occurred on the device. -You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name: +To drill down even further, select the IoT device entity link and open the device entity details page, or hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name: :::image type="content" source="media/iot-solution/entity-behavior-iot-devices-alerts.png" alt-text="Screenshot of IoT devices by number of alerts on entity behavior page."::: -For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md). +For more information, see [Investigate entities with entity pages in Microsoft Sentinel](../../sentinel/entity-pages.md) and [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md). ### Investigate the alert in Defender for IoT |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Features released earlier than nine months ago are described in the [What's new > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > +## March 2023 ++|Service area |Updates | +||| +| **OT networks** | **Cloud features**: - [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) | ++### New Microsoft Sentinel incident experience for Defender for IoT ++Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-new-incident-experience-is-here/ba-p/3717042) includes specific features for Defender for IoT customers. When investigating OT/IoT-related incidents, SOC analysts can now use the following enhancements on incident details pages: ++- **View related sites, zones, sensors, and device importance** to better understand an incident's business impact and physical location. ++- **Review an aggregated timeline of affected devices and related device details**, instead of investigating on separate entity details pages for the related devices ++- **Review OT alert remediation steps** directly on the incident details page ++For more information, see [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) and [Navigate and investigate incidents in Microsoft Sentinel](/azure/sentinel/investigate-incidents). + ## February 2023 |Service area |Updates | Features released earlier than nine months ago are described in the [What's new | **OT networks** | **Cloud features**: <br>- [Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2](#microsoft-sentinel-microsoft-defender-for-iot-solution-version-202) <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | | **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) | + ### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2 [Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](/azure/sentinel/sentinel-solutions-catalog), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries. For more information, see [Define and view OT sensor settings from the Azure por ### Alerts GA in the Azure portal -The **Alerts** page in the Azure portal is now out for General Availability. 
Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events detected in your network. Alerts are triggered when OT or Enterprise IoT network sensors, or the [Defender for IoT micro agent](../device-builders/index.yml), detect changes or suspicious activity in network traffic that need your attention. +The **Alerts** page in the Azure portal is now out for General Availability. Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events detected in your network. Alerts are triggered when OT or Enterprise IoT network sensors, or the [Defender for IoT micro agent](../device-builders/index.yml), detect changes or suspicious activity in network traffic that needs your attention. Specific alerts triggered by the Enterprise IoT sensor currently remain in public preview. For more information, see: ||| |**OT networks** |**Sensor version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors)<br><br>**Sensor version 22.2.3**: [Update sensor software from the Azure portal](#update-sensor-software-from-the-azure-portal-public-preview) | -- ### Update sensor software from the Azure portal (Public preview) For cloud-connected sensor versions [22.2.3](release-notes.md#2223) and higher, now you can update your sensor software directly from the new **Sites and sensors** page on the Azure portal. The following Defender for IoT options and configurations have been moved, remov ## Next steps -[Getting started with Defender for IoT](getting-started.md) +[Getting started with Defender for IoT](getting-started.md) |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | Title: Configure the Microsoft Dev Box Preview service -description: "This quickstart shows you how to configure the Microsoft Dev Box Preview service to provide dev boxes for your users. You'll create a dev center, add a network connection, and then create a dev box definition, and a project." +description: This quickstart shows you how to configure the Microsoft Dev Box Preview service to provide dev boxes for your users. You'll create a dev center, add a network connection, and then create a dev box definition and a project. +Customer intent: As an enterprise admin, I want to understand how to create and configure dev box components so that I can provide dev box projects for my users. -<!-- - Customer intent: - As an enterprise admin I want to understand how to create and configure dev box components so that I can provide dev box projects my users. - --> # Quickstart: Configure the Microsoft Dev Box Preview service -This quickstart describes how to configure the Microsoft Dev Box service by using the Azure portal to enable development teams to self-serve dev boxes. +This quickstart describes how to configure your instance of the Microsoft Dev Box Preview service so that development teams can create their own dev boxes in your deployment. -This quickstart will take you through the process of setting up your Dev Box environment. You'll create a dev center to organize your dev box resources, configure network components to enable dev boxes to connect to your organizational resources, and create a dev box definition that will form the basis of your dev boxes. YouΓÇÖll then create a project and a dev box pool, which work together to help you give access to users who will manage or use the dev boxes. +In the quickstart, you go through the process of setting up your Microsoft Dev Box environment by using the Azure portal. You create a dev center to organize your Dev Box resources, you configure network components so that dev boxes can connect to your organization's resources, and you create a dev box definition that is the basis of your dev boxes. Then, you create a project and a dev box pool to form a framework you can use to give access to users to manage or use dev boxes. -After you've completed this quickstart, you'll have a Dev Box configuration ready for users to create and connect to dev boxes. +When you finish this quickstart, you'll have a dev box configuration in which users you give access to can create dev boxes and connect to dev boxes in the dev box pool. ## Prerequisites -To complete this quick start, make sure that you have: +To complete this quickstart, make sure that you have: + - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Owner or Contributor permissions on an Azure Subscription or a specific resource group.-- Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet.-- User licenses. To use Dev Box, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory P1. 
- - These licenses are available independently and also included in the following subscriptions: - - Microsoft 365 F3 - - Microsoft 365 E3, Microsoft 365 E5 - - Microsoft 365 A3, Microsoft 365 A5 - - Microsoft 365 Business Premium - - Microsoft 365 Education Student Use Benefit -- [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) allows you to use your Windows licenses on Azure with Dev Box.+- Owner or Contributor role on an Azure subscription or for the resource group you'll use to hold your dev box resources. +- Network Contributor permissions on an existing virtual network (owner or contributor) or permissions to create a new virtual network and subnet. +- User licenses. To use Dev Box, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory P1. These licenses are available independently and also are included in the following subscriptions: + - Microsoft 365 F3 + - Microsoft 365 E3, Microsoft 365 E5 + - Microsoft 365 A3, Microsoft 365 A5 + - Microsoft 365 Business Premium + - Microsoft 365 Education Student Use Benefit +- [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/), which allows you to use your Windows licenses on Azure with Dev Box. ## Create a dev center -To begin the configuration, you'll create a dev center to enable you to manage your dev box resources. The following steps show you how to create and configure a dev center. +To begin the configuration, create a dev center you can use to manage your Dev Box resources. The following steps show you how to create and configure a dev center. 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In the search box, type *Dev centers* and then select **Dev centers** in the search results. +1. In the search box, enter **dev centers**. In the search results, select **Dev centers**. - :::image type="content" source="./media/quickstart-configure-dev-box-service/discover-dev-centers.png" alt-text="Screenshot showing the Azure portal with the search box and dev centers result highlighted."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/discover-dev-centers.png" alt-text="Screenshot that shows the Azure portal with the search box and the Dev centers result highlighted."::: -1. On the dev centers page, select **+Create**. +1. On the **Dev centers** page, select **Create**. - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center.png" alt-text="Screenshot showing the Azure portal Dev center with create highlighted."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center.png" alt-text="Screenshot that shows the Azure portal Dev centers with Create highlighted."::: -1. On the **Create a dev center** page, on the **Basics** tab, enter the following values: +1. 
On the **Create a dev center** pane, on the **Basics** tab, enter the following values: |Name|Value| |-|-|- |**Subscription**|Select the subscription in which you want to create the dev center.| - |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| + |**Subscription**|Select the subscription in which to create the dev center.| + |**Resource group**|Select an existing resource group, or select **Create new** and enter a name for the resource group.| |**Name**|Enter a name for your dev center.|- |**Location**|Select the location/region you want the dev center to be created in.| - - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-basics.png" alt-text="Screenshot showing the Create dev center Basics tab."::: - - The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box Preview](https://aka.ms/devbox_acom). + |**Location**|Select the location or region to create the dev center in.| ++ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-basics.png" alt-text="Screenshot that shows the Create a dev center Basics tab."::: ++ Currently supported Azure locations with capacity are listed in [Frequently asked questions about Microsoft Dev Box](https://aka.ms/devbox_acom). -1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign. - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-tags.png" alt-text="Screenshot showing the Create dev center Tags tab."::: +1. \[Optional\] On the **Tags** tab, enter a name and value pair that you want to assign. ++ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-tags.png" alt-text="Screenshot that shows the Create a dev center Tags tab."::: 1. Select **Review + Create**. 1. On the **Review** tab, select **Create**. -1. You can check on the progress of the dev center creation from any page in the Azure portal by opening the notifications pane. -- :::image type="content" source="./media/quickstart-configure-dev-box-service/notifications-pane.png" alt-text="Screenshot showing Azure portal notifications pane."::: + To check the progress of the dev center deployment, select your notifications on any page in the Azure portal. -1. When the deployment is complete, select **Go to resource**. You'll see the dev center page. + :::image type="content" source="./media/quickstart-configure-dev-box-service/notifications-pane.png" alt-text="Screenshot that shows the Azure portal notifications pane."::: +1. When the deployment is complete, select **Go to resource** to go to the dev center overview. ## Create a network connection -Network connections determine the region into which dev boxes are deployed and allow them to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box. +A network connection determines the region where a dev box is deployed, and it allows a dev box to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box. To create a network connection, you must have: -- An existing virtual network (vnet) and subnet. 
If you don't have a vnet and subnet available, follow the instructions here: [Create a virtual network and subnet](#create-a-virtual-network-and-subnet) to create them.-- A configured and working Hybrid AD join or Azure AD join.- - **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md). - - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md). -- If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).+- An existing virtual network and subnet. If you don't have a virtual network and subnet available, complete the instructions in [Create a virtual network and subnet](#create-a-virtual-network-and-subnet) to create them. +- A configured and working Hybrid Azure Active Directory (Azure AD) join or Azure AD join deployment. + - **Azure AD join:** To learn how to join devices directly to Azure AD, see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md). + - **Hybrid Azure AD join:** To learn how to join your Active Directory Domain Services (AD DS) domain-joined computers to Azure AD from an on-premises AD DS environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md). +- If your organization routes egress traffic through a firewall, you must open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ### Create a virtual network and subnet -You must have a vnet and subnet available for your network connection; create them using these steps: +You must have a virtual network and a subnet available for your network connection. You can create them by completing these steps: -1. In the search box, type *Virtual Network* and then select **Virtual Network** in the search results. +1. In the portal search box, enter **virtual network**. In the search results, select **Virtual Network**. 1. On the **Virtual Network** page, select **Create**. -1. In **Create virtual network**, enter or select this information in the **Basics** tab: -- :::image type="content" source="./media/quickstart-configure-dev-box-service/vnet-basics-tab.png" alt-text="Screenshot of creating a virtual network in Azure portal."::: +1. On the **Create virtual network** pane, enter or select this information on the **Basics** tab: | Setting | Value | | - | -- |- | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select an existing resource group or select **Create new**, and enter a name for the resource group. | - | **Instance details** | | - | Name | Enter a name for your vnet. | - | Region | Enter the location/region you want the vnet to be created in. | + | **Subscription** | Select your subscription. | + | **Resource group** | Select an existing resource group, or select **Create new** and enter a name for the resource group. | + | **Name** | Enter a name for your virtual network. 
| + | **Region** | Enter the location or region you want the virtual network to be created in. | ++ :::image type="content" source="./media/quickstart-configure-dev-box-service/vnet-basics-tab.png" alt-text="Screenshot that shows creating a virtual network in Azure portal."::: 1. On the **IP Addresses** tab, note the default IP address assignment and subnet. You can accept the defaults unless they conflict with your existing configuration. -1. Select the **Review + create** tab. Review the vnet and subnet configuration. +1. Select the **Review + create** tab. Review the virtual network and subnet configuration. 1. Select **Create**. ### Create a network connection -Now that you have an available vnet and subnet, you need a network connection to associate the vnet and subnet with the dev center. Follow these steps to create a network connection: +Now that you have a virtual network and subnet, you need a network connection to associate the virtual network and subnet with the dev center. -1. In the search box, type *Network connections* and then select **Network connections** in the search results. +To create a network connection, complete the steps on the relevant tab: -1. On the **Network Connections** page, select **+Create**. - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-network-connection.png" alt-text="Screenshot showing the Network Connections page with Create highlighted."::: +#### [Azure AD join](#tab/AzureADJoin/) -1. Follow the steps on the appropriate tab to create your network connection. - #### [Azure AD join](#tab/AzureADJoin/) +1. In the portal search box, enter **network connections**. In the search results, select **Network connections**. - On the **Create a network connection** page, on the **Basics** tab, enter the following values: +1. On the **Network Connections** page, select **Create**. - |Name|Value| + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-network-connection.png" alt-text="Screenshot that shows the Network Connections page with Create highlighted."::: ++1. On the **Create a network connection** pane, on the **Basics** tab, select or enter the following values: ++ Name|Value| |-|-| |**Domain join type**|Select **Azure active directory join**.|- |**Subscription**|Select the subscription in which you want to create the network connection.| - |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| + |**Subscription**|Select the subscription in which to create the network connection.| + |**Resource group**|Select an existing resource group, or select **Create new** and enter a name for the resource group.| |**Name**|Enter a descriptive name for your network connection.| |**Virtual network**|Select the virtual network you want the network connection to use.| |**Subnet**|Select the subnet you want the network connection to use.| - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-nc-native-join.png" alt-text="Screenshot showing the create network connection basics tab with Azure Active Directory join highlighted."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-nc-native-join.png" alt-text="Screenshot that shows the Create a network connection Basics tab with Azure Active Directory join highlighted."::: ++1. Select **Review + Create**. ++1. On the **Review** tab, select **Create**. ++1. When the deployment is complete, select **Go to resource**. 
The network connection appears on the **Network Connections** page. ++#### [Hybrid Azure AD join](#tab/HybridAzureADJoin/) ++1. In the portal search box, enter **network connections**. In the search results, select **Network connections**. ++1. On the **Network Connections** page, select **Create**. - #### [Hybrid Azure AD join](#tab/HybridAzureADJoin/) + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-network-connection.png" alt-text="Screenshot that shows the Network Connections page with Create highlighted."::: - On the **Create a network connection** page, on the **Basics** tab, enter the following values: +1. On the **Create a network connection** pane, on the **Basics** tab, select or enter the following values: |Name|Value| |-|-| |**Domain join type**|Select **Hybrid Azure active directory join**.|- |**Subscription**|Select the subscription in which you want to create the network connection.| - |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| + |**Subscription**|Select the subscription in which to create the network connection.| + |**Resource group**|Select an existing resource group, or select **Create new** and enter a name for the resource group.| |**Name**|Enter a descriptive name for your network connection.| |**Virtual network**|Select the virtual network you want the network connection to use.| |**Subnet**|Select the subnet you want the network connection to use.|- |**AD DNS domain name**| The DNS name of the Active Directory domain that you want to use for connecting and provisioning Cloud PCs. For example, corp.contoso.com. | - |**Organizational unit**| An organizational unit (OU) is a container within an Active Directory domain, which can hold users, groups, and computers. | - |**AD username UPN**| The username, in user principal name (UPN) format, that you want to use for connecting the Cloud PCs to your Active Directory domain. For example, svcDomainJoin@corp.contoso.com. This service account must have permission to join computers to the domain and, if set, the target OU. | - |**AD domain password**| The password for the user. | + |**AD DNS domain name**| Enter the DNS name of the Active Directory domain to use for connecting and provisioning cloud PCs. For example, `corp.contoso.com`. | + |**Organizational unit**| \[Optional\] Enter an OU. An organizational unit (OU) is a container within an Active Directory domain that can hold users, groups, and computers. | + |**AD username UPN**| Enter the username, in user principal name (UPN) format, that you want to use to connect the cloud PCs to your Active Directory domain. For example, `svcDomainJoin@corp.contoso.com`. This service account must have permissions to join computers to the domain and the target OU, if set. | + |**AD domain password**| Enter the password for the user. | - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-nc-hybrid-join.png" alt-text="Screenshot showing the create network connection basics tab with Hybrid Azure Active Directory join highlighted."::: -- + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-nc-hybrid-join.png" alt-text="Screenshot that shows the Create a network connection Basics tab with Hybrid Azure Active Directory join highlighted."::: 1. Select **Review + Create**. 1. On the **Review** tab, select **Create**. -1. When the deployment is complete, select **Go to resource**. 
You'll see the Network Connection overview page. +1. When the deployment is complete, select **Go to resource**. The network connection appears on the **Network Connections** page. ++++## Attach a network connection to a dev center ++To provide networking configuration information for dev boxes, associate a network connection with a dev center. -## Attach network connection to dev center +1. In the portal search box, enter **dev centers**. In the search results, select **Dev centers**. -To provide networking configuration information for dev boxes, you need to associate a network connection with a dev center. In this step, you'll attach the network connection to your dev center. +1. Select the dev center you created, and then select **Networking**. -1. In the search box, type *Dev centers* and then select **Dev centers** in the search results. +1. Select **Add**. -1. Select the dev center you created and select **Networking**. - -1. Select **+ Add**. - -1. In the **Add network connection** pane, select the network connection you created earlier, and then select **Add**. +1. On the **Add network connection** pane, select the network connection you created earlier, and then select **Add**. -After creation, several health checks are run on the network. You can view the status of the checks on the resource overview page. Network connections that pass all the health checks can be added to a dev center and used in the creation of dev box pools. The dev boxes within the dev box pools will be created and domain joined in the location of the vnet assigned to the network connection. +After the network connection is attached to the dev center, several health checks are run on the network connection. You can view the status of the checks on the resource overview page. You can add network connections that pass all health checks to a dev center and use them to create dev box pools. Dev boxes that are in dev box pools are created and domain joined in the location of the virtual network that's assigned to the network connection. -To resolve any errors, refer to the [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection). +To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection). ## Create a dev box definition -The following steps show you how to create and configure a dev box definition. Dev box definitions define the image and SKU (compute + storage) that will be used in creation of the dev boxes. +The following steps show you how to create and configure a dev box definition. A dev box definition defines the image and SKU (compute + storage) that you use when you create a dev box. 1. Open the dev center in which you want to create the dev box definition. 1. Select **Dev box definitions**. -1. On the **Dev box definitions** page, select **+Create**. +1. On the **Dev box definitions** page, select **Create**. -1. On the **Create dev box definition** page, enter the following values: -- Enter the following values: +1. On the **Create dev box definition** pane, select or enter the following values: |Name|Value|Note| |-|-|-| |**Name**|Enter a descriptive name for your dev box definition.|- |**Image**|Select the base operating system for the dev box. You can select an image from the Azure Marketplace or from an Azure Compute Gallery. 
</br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To use custom images while creating a dev box definition, you can attach an Azure Compute Gallery that has the custom images. Learn [How to configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).| - |**Image version**|Select a specific, numbered version to ensure all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure new dev boxes use the latest image available.|Selecting the Latest image version enables the dev box pool to use the most recent image version for your chosen image from the gallery. This way, the dev boxes created will stay up to date with the latest tools and code on your image. Existing dev boxes won't be modified when an image version is updated.| + |**Image**|Select the base operating system for the dev box. You can select an image from Azure Marketplace or from an instance of the Azure Compute Gallery service. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To use custom images when you create a dev box definition, you can attach an instance of Compute Gallery that has the custom images. Learn [how to configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).| + |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|When you select **Latest** for **Image version**, the dev box pool can use the most recent version of the image you choose in the gallery. This way, the dev boxes stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated.| |**Compute**|Select the compute combination for your dev box definition.|| |**Storage**|Select the amount of storage for your dev box definition.|| - :::image type="content" source="./media/quickstart-configure-dev-box-service/recommended-test-image.png" alt-text="Screenshot showing the Create dev box definition page."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/recommended-test-image.png" alt-text="Screenshot that shows the Create dev box definition page."::: 1. Select **Create**. ## Create a project -Dev box projects enable you to manage team level settings, including providing access to development teams so developers can create dev boxes. - -The following steps show you how to create and configure a project in dev box. +You can use dev box projects to manage team-level settings, including providing access to development teams so that developers can create dev boxes. ++The following steps show you how to create and configure a project in Microsoft Dev Box. -1. In the search box, type *Projects* and then select **Projects** in the search results. +1. In the portal search box, enter **projects**. In the search results, select **Projects**. -1. On the Projects page, select **+Create**. - -1. On the **Create a project** page, on the **Basics** tab, enter the following values: +1. On the **Projects** page, select **Create**. ++1. 
On the **Create a project** pane, on the **Basics** tab, select or enter the following values: |Name|Value| |-|-|- |**Subscription**|Select the subscription in which you want to create the project.| - |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| - |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings will be applied to the project.| + |**Subscription**|Select the subscription in which to create the project.| + |**Resource group**|Select an existing resource group, or select **Create new** and enter a name for the resource group.| + |**Dev center**|Select the dev center to associate with this project. All the dev center-level settings are applied to the project.| |**Name**|Enter a name for your project. | |**Description**|Enter a brief description of the project. | - :::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-project-create.png" alt-text="Screenshot of the Create a dev box project basics tab."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-project-create.png" alt-text="Screenshot that shows the Create a dev box project Basics tab."::: -1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign. +1. \[Optional\] On the **Tags** tab, enter a name and value pair that you want to assign. 1. Select **Review + Create**. The following steps show you how to create and configure a project in dev box. 1. Confirm that the project is created successfully by checking the notifications. Select **Go to resource**. -1. Verify that you see the **Project** page. -+1. Verify that you see the project on the **Projects** page. ## Create a dev box pool -A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and network connections dev boxes will use. You must have at least one pool associated with your project before users can create a dev box. +A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and the network connections dev boxes will use. You must have at least one pool associated with your project before a user can create a dev box. -The following steps show you how to create a dev box pool associated with a project. +The following steps show you how to create a dev box pool that's associated with a project. -1. In the search box, type *Projects* and then select **Projects** in the search results. +1. In the portal search box, enter **projects**. In the search results, select **Projects**. 1. Open the project in which you want to create the dev box pool. - :::image type="content" source="./media/quickstart-configure-dev-box-service/select-project.png" alt-text="Screenshot of the list of existing projects."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/select-project.png" alt-text="Screenshot that shows the list of existing projects."::: -1. Select **Dev box pools** and then select **+ Create**. - - :::image type="content" source="./media/quickstart-configure-dev-box-service/create-pool.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty."::: +1. On the left menu under **Manage**, select **Dev box pools**, and then select **Create**. -1. 
On the **Create a dev box pool** page, enter the following values: + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-pool.png" alt-text="Screenshot that shows the list of dev box pools in a project. The list is empty."::: ++1. On the **Create a dev box pool** pane, select or enter the following values: |Name|Value| |-|-|- |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes, and must be unique within a project.| - |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes created within this pool.| - |**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes created within this pool.| - |**Dev Box Creator Privileges**|Select Local Administrator or Standard User.| - |**Enable Auto-stop**|Yes is the default. Select No to disable an Auto-stop schedule. You can configure an Auto-stop schedule after the pool has been created.| - |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool will be shut down at this time, everyday.| + |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes. The pool name must be unique within a project.| + |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes that are created in this pool.| + |**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes that are created in this pool.| + |**Dev box Creator Privileges**|Select Local Administrator or Standard User.| + |**Enable Auto-stop**|**Yes** is the default. Select **No** to disable an Auto-stop schedule. You can configure an Auto-stop schedule after the pool is created.| + |**Stop time**| Select a time to shut down all the dev boxes in the pool. All dev boxes in this pool will be shut down at this time every day.| |**Time zone**| Select the time zone that the stop time is in.| |**Licensing**| Select this check box to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. | -- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-pool-details.png" alt-text="Screenshot of the Create dev box pool dialog."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-pool-details.png" alt-text="Screenshot that shows the Create a dev box pool pane."::: 1. Select **Create**.- -1. Verify that the new dev box pool appears in the list. You may need to refresh the screen. -The dev box pool will be deployed and health checks will be run to ensure the image and network pass the validation criteria to be used for dev boxes. The following screenshot shows four dev box pools, each with a different status. +1. Verify that the new dev box pool appears in the list. You might need to refresh the page to see the dev box pool. - :::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-pool-grid-populated.png" alt-text="Screenshot showing a list of existing pools."::: + The dev box pool is deployed and health checks are run to ensure that the image and network pass the validation criteria to be used for dev boxes. The following screenshot shows four dev box pools, each with a different status. 
++ :::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-pool-grid-populated.png" alt-text="Screenshot that shows a list of existing pools with four different status messages."::: ## Provide access to a dev box project -Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it. +Before a user can create a dev box that's based in the dev box pools in a project, you must give the user access through a role assignment. The Dev Box User role gives a dev box user the permissions to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can assign the role to a user. ++1. In the portal search box, enter **projects**. In the search results, select **Projects**. -1. In the search box, type *Projects* and then select **Projects** in the search results. +1. Select the project you want to give team members access to. -1. Select the project you want to provide your team members access to. - - :::image type="content" source="./media/quickstart-configure-dev-box-service/select-project.png" alt-text="Screenshot of the list of existing projects."::: + :::image type="content" source="./media/quickstart-configure-dev-box-service/select-project.png" alt-text="Screenshot that shows a list of existing projects."::: -1. Select **Access Control (IAM)** from the left menu. +1. On the left menu, select **Access control (IAM)**. :::image type="content" source="./media/quickstart-configure-dev-box-service/project-permissions.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted.":::- -1. Select **Add** > **Add role assignment**. ++1. On the command bar, select **Add** > **Add role assignment**. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).- + | Setting | Value | | | | | **Role** | Select **DevCenter Dev Box User**. | Before users can create dev boxes based on the dev box pools in a project, you m | **Members** | Select the users or groups you want to have access to the project. | :::image type="content" source="media/how-to-dev-box-user/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane.":::- - The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). ++ The user will now be able to view the project and all the pools in it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). [!INCLUDE [supported accounts note](./includes/note-supported-accounts.md)] -## Project admins +## Assign project admin role -The Microsoft Dev Box service makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. 
+Through the Microsoft Dev Box service, you can delegate project administration to a member of the project team. Project admins can assist with the day-to-day management of projects for their team, including creating and managing dev box pools. To give a user permissions to manage projects, assign the DevCenter Project Admin role to the user. -You can assign the DevCenter Project Admin role by using the [Provide access to a dev box project](#provide-access-to-a-dev-box-project) steps, but selecting the Project Admin role instead of the Dev Box User role. For more information, go to [Provide access to projects for project admins](how-to-project-admin.md). +You can assign the DevCenter Project Admin role by completing the steps in [Provide access to a dev box project](#provide-access-to-a-dev-box-project), but select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md). [!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)]+ ## Next steps -In this quickstart, you created a dev box project and the resources necessary to support it. You created a dev center, added a network connection, created a dev box definition, and a project. You then created a dev box pool within an existing project and assigned a user permission to create dev boxes based on the new pool. +In this quickstart, you created a dev box project and the resources necessary to support it. You created a dev center, added a network connection, created a dev box definition, and created a project. Then, you created a dev box pool within an existing project and assigned user permissions to create dev boxes that are based in the new pool. -To learn about how to create and connect to a dev box, advance to the next quickstart: +To learn how to create and connect to a dev box, go to the next quickstart: > [!div class="nextstepaction"]-> [Create a dev box](./quickstart-create-dev-box.md) +> [Create a dev box](./quickstart-create-dev-box.md) |
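If you prefer to script the supporting pieces of this quickstart, the following is a minimal sketch, not an official procedure. It assumes the `Az.Network` and `Az.Resources` modules; the resource names, address ranges, subscription ID, and user principal name are placeholders, and the dev center, network connection, and project themselves are created in the portal as described above. The `DevCenter Dev Box User` role name comes from the role-assignment step in this quickstart.

```powershell
# Minimal sketch: create the virtual network and subnet used by the network connection,
# then grant a user the "DevCenter Dev Box User" role on an existing Dev Box project.
# Requires the Az.Network and Az.Resources modules. All names, address ranges, the
# subscription ID, and the UPN are placeholders.
$rgName   = 'rg-devbox-demo'
$location = 'eastus'

New-AzResourceGroup -Name $rgName -Location $location

$subnet = New-AzVirtualNetworkSubnetConfig -Name 'devbox-subnet' -AddressPrefix '10.4.0.0/24'
New-AzVirtualNetwork -Name 'devbox-vnet' -ResourceGroupName $rgName -Location $location `
  -AddressPrefix '10.4.0.0/16' -Subnet $subnet

# Grant a developer permission to create dev boxes from the project's pools.
$projectId = '/subscriptions/<subscription-id>/resourceGroups/rg-devbox-demo/providers/Microsoft.DevCenter/projects/<project-name>'
New-AzRoleAssignment -SignInName 'dev.user@contoso.com' `
  -RoleDefinitionName 'DevCenter Dev Box User' -Scope $projectId
```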
dms | Known Issues Azure Postgresql Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md | When you try to perform an online migration from Amazon Web Service (AWS) Relati - The database name can't include a semicolon (;). - A captured table must have a primary key. If a table doesn't have a primary key, the result of DELETE and UPDATE record operations will be unpredictable. - Updating a primary key segment is ignored. Applying such an update will be identified by the target as an update that didn't update any rows. The result is a record written to the exceptions table.+- If your table has a **JSON** column, any DELETE or UPDATE operations on this table can lead to a failed migration. - Migration of multiple tables with the same name but a different case might cause unpredictable behavior and isn't supported. An example is the use of table1, TABLE1, and Table1. - Change processing of [CREATE | ALTER | DROP | TRUNCATE] table DDLs isn't supported. - In Database Migration Service, a single migration activity can only accommodate up to four databases. |
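Because a captured table must have a primary key for DELETE and UPDATE operations to replicate predictably, it helps to inventory tables without one before you start the online migration. The following is a minimal sketch, assuming the `psql` client is installed and can reach the AWS RDS PostgreSQL source; the connection values are placeholders.

```powershell
# Minimal sketch: list user tables on the source server that have no primary key,
# since DELETE and UPDATE changes on such tables don't replicate predictably.
# Requires the psql client; the endpoint, user, database, and password are placeholders.
$env:PGPASSWORD = '<source-password>'

$query = @'
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint p
    WHERE p.conrelid = c.oid AND p.contype = 'p');
'@

psql -h '<aws-rds-endpoint>' -U '<source-user>' -d '<database-name>' -c $query
```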
dms | Tutorial Sql Server To Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md | To complete this tutorial, you need to: > [!NOTE] > - Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.- > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service. - + +- Ensure both the Azure Database Migration Service IP address and the Azure SQL Managed Instance subnet can communicate with the blob container. + [!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)] [!INCLUDE [instance-create](../../includes/database-migration-service-instance-create.md)] |
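Before you configure migration settings, you can sanity-check the storage account against the limitations above: confirm that it has no private endpoint connections and that its firewall won't block the Database Migration Service IP address or the SQL Managed Instance subnet. The following is a minimal sketch, not an official validation step; it assumes the `Az.Storage` and `Az.Network` modules and placeholder resource names.

```powershell
# Minimal sketch: check the backup storage account against the limitations above.
# Requires the Az.Storage and Az.Network modules; the resource names are placeholders.
$rgName      = '<resource-group>'
$storageName = '<storage-account-name>'

$storage = Get-AzStorageAccount -ResourceGroupName $rgName -Name $storageName

# Any results here mean the account uses a private endpoint, which isn't supported.
Get-AzPrivateEndpointConnection -PrivateLinkResourceId $storage.Id

# Review the firewall: DefaultAction should be Allow, or the IP and virtual network
# rules must cover the Database Migration Service IP and the managed instance subnet.
Get-AzStorageAccountNetworkRuleSet -ResourceGroupName $rgName -Name $storageName |
  Select-Object DefaultAction, IpRules, VirtualNetworkRules
```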
event-grid | Event Schema Data Manager For Agriculture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-data-manager-for-agriculture.md | + + Title: Azure Data Manager for Agriculture +description: Describes the properties that are provided for Azure Data Manager for Agriculture events with Azure Event Grid. + Last updated : 03/02/2023+++# Azure Data Manager for Agriculture (Preview) as Event Grid source ++This article provides the properties and schema for Azure Data Manager for Agriculture (Preview) events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md) and [Cloud event schema](cloud-event-schema.md). ++## Available event types ++### Farm management related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.PartyChanged|Published when a `Party` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.FarmChangedV2| Published when a `Farm` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.FieldChangedV2|Published when a `Field` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.SeasonChanged|Published when a `Season` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a `Seasonal Field` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a `Boundary` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.CropChanged|Published when a `Crop` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.CropProductChanged|Published when a `Crop Product` is created /updated/deleted.| +|Microsoft.AgFoodPlatform.AttachmentChangedV2|Published when an `Attachment` is created/updated/deleted. +|Microsoft.AgFoodPlatform.ManagementZoneChangedV2|Published when a `Management Zone` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.ZoneChangedV2|Published when an `Zone` is created/updated/deleted.| ++### Satellite data related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2| Published when a satellite data ingestion job's status is changed, for example, job is created, has progressed or completed.| ++### Weather data related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2|Published when a weather data ingestion job's status is changed, for example, job is created, has progressed or completed.| +|Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2| Published when a weather data refresher job status is changed, for example, job is created, has progressed or completed.| ++### Farm activities data related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.ApplicationDataChangedV2|Published when an `Application Data` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.HarvestDataChangedV2|Published when a `Harvesting Data` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.TillageDataChangedV2|Published when a `Tillage Data` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.PlantingDataChangedV2|Published when a `Planting Data` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2|Published when an image-processing rasterizes job's status is changed, for example, job is created, has progressed or completed.| +|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2| Published when a farm operations data ingestion job's status is changed, for example, 
job is created, has progressed or completed.| ++### Sensor data related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.SensorMappingChangedV2|Published when a `Sensor Mapping` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2|Published when a `Sensor Partner Integration` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.DeviceDataModelChanged|Published when `Device Data Model` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.DeviceChanged|Published when a `Device` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.SensorDataModelChanged|Published when a `Sensor Data Model` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.SensorChanged|Published when a `Sensor` is created/updated/deleted.| ++### Insight and observations related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.PrescriptionChangedV2|Published when a `Prescription` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.PrescriptionMapChangedV2|Published when a `Prescription Map` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2|Published when a `Plant Tissue Analysis` data is created/updated/deleted.| +|Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2|Published when a `Nutrient Analysis` data is created/updated/deleted.| +|Microsoft.AgFoodPlatform.InsightChangedV2| Published when an `Insight` is created/updated/deleted.| +|Microsoft.AgFoodPlatform.InsightAttachmentChangedV2| Published when an `Insight Attachment` is created/updated/deleted.| ++### Model inference jobs related event types ++|Event Name | Description| +|:--:|:-:| +|Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2|Published when a Biomass Model job's status is changed, for example, job is created, has progressed or completed.| +|Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2|Published when a Soil Moisture Model job's status is changed, for example, job is created, has progressed or completed.| +|Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2|Published when a Sensor Placement Model job's status is changed, for example, job is created, has progressed or completed.| ++## Example events ++# [Event Grid event schema](#tab/event-grid-event-schema) ++The following example shows the schema for **Microsoft.AgFoodPlatform.PartyChanged**: ++```JSON +[ + { + "data": { + "actionType": "Deleted", + "modifiedDateTime": "2022-10-17T18:43:37Z", + "eTag": "f700fdd7-0000-0700-0000-634da2550000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "<YOUR-PARTY-ID>", + "createdDateTime": "2022-10-17T18:43:30Z" + }, + "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/<YOUR-PARTY-ID>", + "eventType": "Microsoft.AgFoodPlatform.PartyChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-10-17T18:43:37.3306735Z" + } +] +``` ++# [Cloud event schema](#tab/cloud-event-schema) ++The following example shows the schema for **Microsoft.AgFoodPlatform.PartyChanged**: ++```JSON +[ + { + "data": { + "actionType": "Deleted", + "modifiedDateTime": "2022-10-17T18:43:37Z", + "eTag": "f700fdd7-0000-0700-0000-634da2550000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "<YOUR-PARTY-ID>", + "createdDateTime": "2022-10-17T18:43:30Z" + }, + "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b", + 
"source": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/<YOUR-PARTY-ID>", + "type": "Microsoft.AgFoodPlatform.PartyChanged", + "specversion":"1.0", + "time": "2022-10-17T18:43:37.3306735Z" + } +] +``` ++++## Event properties ++# [Event Grid event schema](#tab/event-grid-event-schema) ++An event has the following top-level data: ++| Property | Type | Description | +|:--:|:-:|:-:| +| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. | +| `subject` | string | Publisher-defined path to the event subject. | +| `eventType` | string | One of the registered event types for this event source. | +| `eventTime` | string | The time the event is generated based on the provider's UTC time. | +| `id` | string | Unique identifier for the event. | +| `data` | object | Data Manager for Agriculture event data. | +| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. | +| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. | ++# [Cloud event schema](#tab/cloud-event-schema) ++An event has the following top-level data: ++| Property | Type | Description | +|:--:|:-:|:-:| +| `source` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. | +| `subject` | string | Publisher-defined path to the event subject. | +| `type` | string | One of the registered event types for this event source. | +| `time` | string | The time the event is generated based on the provider's UTC time. | +| `id` | string | Unique identifier for the event. | +| `data` | object | Data Manager for Agriculture event data. | +| `specversion` | string | CloudEvents schema specification version. | ++++The data object has the following common properties: ++### For resource change related event types ++|Property | Type| Description| +|:--:|:-:|:-:| +|id| String| Unique ID of resource.| +|actionType| String| Indicates the change that triggered publishing of the event. 
Applicable values are created, updated, deleted.| +|properties| Object| It contains user-defined key-value pairs.| +|modifiedDateTime|String| Indicates the time at which the event was last modified.| +|createdDateTime| String| Indicates the time at which the resource was created.| +|status| String| Contains the user-defined status of the object.| +|eTag| String| Implements optimistic concurrency.| +|description| string| Textual description of the resource.| +|name| string| Name to identify resource.| ++### For job status change related event types ++|Property| Type| Description| +|:--:|:-:|:-:| +|id|String| Unique ID of the job.| +|name| string| User-defined name of the job.| +|status|string|Various states a job can be in.| +|isCancellationRequested| boolean|Flag that gets set when job cancellation is requested.| +|description|string| Textual description of the job.| +|partyId|string| Party ID for which the job was created.| +|message|string| Status message to capture more details of the job.| +|lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ.| +|createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.| +|properties| Object| It contains user-defined key-value pairs.| ++## Next steps ++* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md). +* For more information about how to create an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md). |
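To see how the payloads above might be consumed, the following sketch parses an event batch shaped like the `Microsoft.AgFoodPlatform.PartyChanged` example and branches on `actionType`. The wrapper is illustrative only; a real handler might run in an Event Grid-triggered Azure Function or webhook, and the file path is a placeholder for wherever your subscription delivers the event JSON.

```powershell
# Minimal sketch: parse an event batch shaped like the PartyChanged example in this
# article and branch on the actionType property. The wrapper is illustrative; a real
# handler might run in an Event Grid-triggered Azure Function or webhook.
$payload = Get-Content -Path './party-changed-sample.json' -Raw   # placeholder path
$events  = $payload | ConvertFrom-Json

foreach ($gridEvent in $events) {
    switch ($gridEvent.eventType) {
        'Microsoft.AgFoodPlatform.PartyChanged' {
            $data = $gridEvent.data
            Write-Output "Party '$($data.id)' was $($data.actionType.ToLower()) at $($data.modifiedDateTime)"
        }
        default {
            Write-Output "Unhandled event type: $($gridEvent.eventType)"
        }
    }
}
```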
frontdoor | How To Enable Private Link Storage Static Website | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-enable-private-link-storage-static-website.md | + + Title: 'Connect Azure Front Door Premium to a storage static website origin with Private Link' ++description: Learn how to connect your Azure Front Door Premium to a storage static website privately. ++++ Last updated : 03/03/2023++++# Connect Azure Front Door Premium to a storage static website with Private Link ++This article guides you through how to configure Azure Front Door Premium tier to connect to your storage static website privately using the Azure Private Link service. ++## Prerequisites ++* An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* Create a [Private Link](../private-link/create-private-link-service-portal.md) service for your origin web server. +* Storage static website is enabled on your storage account. Learn how to [enable static website](../storage/blobs/storage-blob-static-website-how-to.md?tabs=azure-portal). ++## Enable Private Link to a storage static website ++In this section, you map the Private Link service to a private endpoint created in Azure Front Door's private network. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**. ++1. Select the origin group that contains the storage static website origin you want to enable Private Link for. ++1. Select **+ Add an origin** to add a new storage static website origin or select a previously created storage static website origin from the list. ++ :::image type="content" source="./media/how-to-enable-private-link-storage-static-website/private-endpoint-storage-static-website-primary.png" alt-text="Screenshot of enabling private link to a storage static website primary."::: ++1. The following table has the information of what values to select in the respective fields while enabling private link with Azure Front Door. Select or enter the following settings to configure the storage static website you want Azure Front Door Premium to connect with privately. ++ | Setting | Value | + | - | -- | + | Name | Enter a name to identify this storage static website origin. | + | Origin Type | Storage (Static website) | + | Host name | Select the host from the dropdown that you want as an origin. | + | Origin host header | You can customize the host header of the origin or leave it as default. | + | HTTP port | 80 (default) | + | HTTPS port | 443 (default) | + | Priority | Different origin can have different priorities to provide primary, secondary, and backup origins. | + | Weight | 1000 (default). Assign weights to your different origin when you want to distribute traffic.| + | Region | Select the region that is the same or closest to your origin. | + | Target sub resource | The type of sub-resource for the resource selected previously that your private endpoint can access. You can select *web* or *web_secondary*. | + | Request message | Custom message to see while approving the Private Endpoint. | ++1. Then select **Add** to save your configuration. Then select **Update** to save your changes. ++## Approve private endpoint connection from storage account ++1. Go to the storage account that you want to connect to Azure Front Door Premium privately. Select **Networking** under *Settings*. ++1. 
In **Networking**, select **Private endpoint connections**. ++ :::image type="content" source="./media/how-to-enable-private-link-storage-static-website/storage-networking-settings.png" alt-text="Screenshot of private endpoint connection tab under storage account networking settings."::: ++1. Select the pending private endpoint request from Azure Front Door Premium, and then select **Approve**. ++ :::image type="content" source="./media/how-to-enable-private-link-storage-static-website/approve-private-endpoint-connection.png" alt-text="Screenshot of approving private endpoint connection from storage account."::: ++1. Once approved, the private endpoint connection status shows as **Approved**. ++ :::image type="content" source="./media/how-to-enable-private-link-storage-static-website/approved-private-endpoint-connection.png" alt-text="Screenshot of approved private endpoint connection from storage account."::: ++## Create private endpoint connection to web_secondary ++When creating a private endpoint connection to the storage static website's secondary sub resource, you need to add a **-secondary** suffix to the origin host header. For example, if your origin host header is *contoso.z13.web.core.windows.net*, you need to change it to *contoso-secondary.z13.web.core.windows.net*. +++Once the origin has been added and the private endpoint connection has been approved, you can test your private link connection to your storage static website. ++## Next steps ++Learn about [Private Link service with storage account](../storage/common/storage-private-endpoints.md). |
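Because the `-secondary` suffix applies only to the first label of the static website host name, a small helper makes the rewrite described above explicit. This is a plain string-manipulation sketch of the naming convention, not a Front Door API call; the host name is the article's own example.

```python
def secondary_host_header(primary_host: str) -> str:
    """Derive the origin host header to use with the web_secondary sub resource.

    Example: contoso.z13.web.core.windows.net -> contoso-secondary.z13.web.core.windows.net
    """
    account, _, remainder = primary_host.partition(".")
    if not remainder:
        raise ValueError("Expected a fully qualified static website host name")
    return f"{account}-secondary.{remainder}"

print(secondary_host_header("contoso.z13.web.core.windows.net"))
```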
governance | Machine Configuration Azure Automation Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-azure-automation-migration.md | Title: Azure Automation State Configuration to machine configuration migration planning description: This article provides process and technical guidance for customers interested in moving from DSC version 2 in Azure Automation to version 3 in Azure Policy. Previously updated : 07/26/2022 Last updated : 03/06/2023 Group where the Automation Account is deployed. Install the PowerShell module "Az.Automation". ```powershell-Install-Module Az.Automation +Install-Module -Name Az.Automation ``` -Next, use the "Get-AzAutomationAccount" command to identify your Automation +Next, use the `Get-AzAutomationAccount` command to identify your Automation Accounts and the Resource Group where they're deployed.-The properties "ResourceGroupName" and "AutomationAccountName" +The properties **ResourceGroupName** and **AutomationAccountName** are important for next steps. -```powershell +```azurepowershell Get-AzAutomationAccount SubscriptionId : <your subscription id> Discover the configurations in your Automation Account. The output contains one entry per configuration. If you have many, store the information as a variable so it's easier to work with. -```powershell +```azurepowershell Get-AzAutomationDscConfiguration -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name> ResourceGroupName : <your resource group name> LogVerbose : False ``` Finally, export each configuration to a local script file using the command-"Export-AzAutomationDscConfiguration". The resulting file name uses the +`Export-AzAutomationDscConfiguration`. The resulting file name uses the pattern `\ConfigurationName.ps1`. -```powershell +```azurepowershell Export-AzAutomationDscConfiguration -OutputFolder /<location on your machine> -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name> -name <your configuration name> UnixMode User Group LastWriteTime Size Name To automate this process, pipe the output of each command above to the next. The example exports 5 configurations. The output pattern is the only indication of success. -```powershell +```azurepowershell Get-AzAutomationAccount | Get-AzAutomationDscConfiguration | Export-AzAutomationDSCConfiguration -OutputFolder /<location on your machine> UnixMode User Group LastWriteTime Size Name the account. For example, to create a list of all modules published to any of your accounts. -```powershell -Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $false +```azurepowershell +Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $false ``` You can also use the PowerShell Gallery as an aid in finding details about modules that are publicly available. For example, the list of modules that are built in to new Automation Accounts, and that contain DSC resources, is produced by the following example. -```powershell -Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $true | Find-Module -erroraction silentlycontinue | ? 
{'' -ne $_.Includes.DscResource} | Select Name, Version -Unique | format-table -AutoSize +```azurepowershell +Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $true | Find-Module -ErrorAction SilentlyContinue | Where-Object {'' -ne $_.Includes.DscResource} | Select-Object -Property Name, Version -Unique | Format-Table -AutoSize Name Version - - the feed is registered in your local environment as a The `Find-Module` command in the example doesn't suppress errors, meaning any modules not found in the gallery return an error message. -```powershell -Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $false | Find-Module | ? {'' -ne $_.Includes.DscResource} | Install-Module +```azurepowershell +Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $false | Find-Module | Where-Object {'' -ne $_.Includes.DscResource} | Install-Module Installing package xWebAdministration' |
hdinsight | Hdinsight 50 Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md | The Open-source component versions associated with HDInsight 5.0 are listed in t | Component | HDInsight 5.0 | HDInsight 4.0 | ||||-| Apache Spark | 3.1.2 | 2.4.4 | +| Apache Spark | 3.1.3 | 2.4.4 | | Apache Hive | 3.1.2 | 3.1.2 | | Apache Kafka | 2.4.1 | 2.1.1 | | Apache Hadoop | 3.1.1 | 3.1.1 | |
hdinsight | Hdinsight 51 Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-51-component-versioning.md | The Open-source component versions associated with HDInsight 5.1 listed in the f | Component | HDInsight 5.1 | HDInsight 5.0 | ||||-| Apache Spark | 3.3 ** | 3.1.2 | +| Apache Spark | 3.3 * | 3.1.2 | | Apache Hive | 3.1.2 * | 3.1.2 | | Apache Kafka | 3.2.0 ** | 2.4.1 | | Apache Hadoop with YARN | 3.3.4 * | 3.1.1 | The Open-source component versions associated with HDInsight 5.1 listed in the f | Apache HBase | 2.4.11 ** | - | | Apache Sqoop | 1.5.0 * | 1.5.0 | | Apache Oozie | 5.2.1 * | 4.3.1 |-| Apache Zookeeper | 3.6.3 ** | 3.4.6 | +| Apache Zookeeper | 3.6.3 * | 3.4.6 | | Apache Livy | 0.7.1 * | 0.5 | | Apache Ambari | 2.7.0 ** | 2.7.0 | | Apache Zeppelin | 0.10.0 * | 0.8.0 | |
hdinsight | Hdinsight Apache Spark With Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-spark-with-kafka.md | While you can create an Azure virtual network, Kafka, and Spark clusters manuall 1. Use the following button to sign in to Azure and open the template in the Azure portal. - <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fhditutorialdata.blob.core.windows.net%2Farmtemplates%2Fcreate-linux-based-kafka-spark-cluster-in-vnet-v4.1.json" target="_blank"><img src="./media/hdinsight-apache-spark-with-kafka/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a> + <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FHDInsight%2Fhdinsight-kafka-tools%2Fmaster%2Fsrc%2Farm%2FHDInsight4.0%2Fhdinsight-kafka-2.1-spark-2.4-vnet%2Fazuredeploy.json" target="_blank"><img src="./media/hdinsight-apache-spark-with-kafka/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a> > [!WARNING]- > To guarantee availability of Kafka on HDInsight, your cluster must contain at least three worker nodes. This template creates a Kafka cluster that contains three worker nodes. + > To guarantee availability of Kafka on HDInsight, your cluster must contain at least four worker nodes. This template creates a Kafka cluster that contains four worker nodes. - This template creates an HDInsight 3.6 cluster for both Kafka and Spark. + This template creates an HDInsight 4.0 cluster for both Kafka and Spark. 1. Use the following information to populate the entries on the **Custom deployment** section: |
hdinsight | Hdinsight Phoenix In Hdinsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-phoenix-in-hdinsight.md | For example, here is a physical table named `product_metrics` with the following ```sql CREATE TABLE product_metrics (- metric_type CHAR(1), + metric_type CHAR(1) NOT NULL, created_by VARCHAR,- created_date DATE, - metric_id INTEGER + created_date DATE NOT NULL, + metric_id INTEGER NOT NULL CONSTRAINT pk PRIMARY KEY (metric_type, created_by, created_date, metric_id)); ``` |
healthcare-apis | How To Use Mapping Debugger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md | -In this article, you'll learn how to use the MedTech service Mapping debugger in the Azure portal. The Mapping debugger is a tool used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations for persistence in the FHIR service. This self-service tool allows you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. +In this article, you'll learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. > [!TIP] > To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Understand the device message data transformation](understand-service.md). |
machine-learning | How To Administrate Data Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md | When an Azure Storage account is behind a virtual network, the storage firewall When the workspace uses a private endpoint and the storage account is also in the VNet, there are extra validation requirements when using studio: * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.-* If the storage account uses a __private endpoint__, the workspace private endpoint and storage service endpoint must be in the same VNet. In this case, they can be in different subnets. +* If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets. ## Azure Data Lake Storage Gen1 |
machine-learning | How To Configure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md | The `ml` extension to the [Azure CLI](/cli/azure/) is the enhanced interface for ## Installation -The new Machine Learning extension **requires Azure CLI version `>=2.15.0`**. Ensure this requirement is met: +The new Machine Learning extension **requires Azure CLI version `>=2.38.0`**. Ensure this requirement is met: :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_version"::: |
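For scripted setups, a hedged Python sketch like the one below can verify the minimum Azure CLI version before installing the extension. It assumes the `az` executable is on `PATH` and reports a plain `x.y.z` version string; the check itself mirrors the `>=2.38.0` requirement stated above.

```python
import json
import subprocess

MINIMUM = (2, 38, 0)  # minimum Azure CLI version required by the ml extension

def azure_cli_version() -> tuple:
    """Return the installed Azure CLI version as a tuple of integers.

    Assumes `az` is on PATH and that it reports a plain x.y.z version string.
    """
    output = subprocess.run(
        ["az", "version", "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return tuple(int(part) for part in json.loads(output)["azure-cli"].split("."))

if azure_cli_version() < MINIMUM:
    raise SystemExit("Azure CLI is too old for the ml extension; run 'az upgrade'.")
print("Azure CLI version check passed.")
```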
machine-learning | How To Inference Onnx Automl Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md | The following code returns the best child run based on the relevant primary metr ```python from azure.identity import DefaultAzureCredential from azure.ai.ml import MLClient+mlflow_client = MlflowClient() credential = DefaultAzureCredential() ml_client = None Download the conda environment file and create an environment object to be used conda_file = mlflow_client.download_artifacts( best_run.info.run_id, "outputs/conda_env_v_1_0_0.yml", local_dir-+) from azure.ai.ml.entities import Environment env = Environment( name="automl-images-env-onnx", returned_job_run = mlflow_client.get_run(returned_job.name) # Download run's artifacts/outputs onnx_model_path = mlflow_client.download_artifacts(- best_run.info.run_id, 'outputs/model_'+str(batch_size)+'.onnx', local_dir + returned_job_run.info.run_id, 'outputs/model_'+str(batch_size)+'.onnx', local_dir ) ``` |
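The excerpt above relies on an `MlflowClient` pointed at the workspace tracking store. A minimal sketch of how that client is typically constructed is shown below, assuming the `azure-ai-ml` and `azureml-mlflow` packages are installed; the workspace coordinates, run ID, and artifact path are placeholders.

```python
import mlflow
from mlflow.tracking import MlflowClient
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace coordinates -- replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Point MLflow at the workspace tracking store, then build the client used above.
workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)
mlflow_client = MlflowClient()

# Download a run artifact to a local folder (run ID and artifact path are illustrative).
local_path = mlflow_client.download_artifacts(
    run_id="<run-id>", path="outputs", dst_path="./artifacts"
)
print(local_path)
```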
machine-learning | How To Log View Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md | Logs can help you diagnose errors and warnings, or track performance metrics lik ``` * If you are doing remote tracking (tracking experiments running outside Azure Machine Learning), configure MLflow to track experiments using Azure Machine Learning. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details. -## Getting started +* To log metrics, parameters, artifacts and models in your experiments in Azure Machine Learning using MLflow, just import MLflow in your script: -To log metrics, parameters, artifacts and models in your experiments in Azure Machine Learning using MLflow, just import MLflow in your training script: --```python -import mlflow -``` + ```python + import mlflow + ``` ### Configuring experiments params = { mlflow.log_params(params) ``` -> [!NOTE] -> Azure Machine Learning SDK v1 logging can't log parameters. We recommend the use of MLflow for tracking experiments as it offers a superior set of features. - ## Logging metrics Metrics, as opposed to parameters, are always numeric. The following table describes how to log specific numeric types: client.log_batch(mlflow.active_run().info.run_id, ## Logging images -MLflow supports two ways of logging images: +MLflow supports two ways of logging images. Both of them persist the given image as an artifact inside of the run. |Logged Value|Example code| Notes| |-|-|-| |Log numpy metrics or PIL image objects|`mlflow.log_image(img, "figure.png")`| `img` should be an instance of `numpy.ndarray` or `PIL.Image.Image`. `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file.| |Log matplotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`| `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file. | -## Logging other types of data +## Logging files ++In general, files in MLflow are called artifacts. You can log artifacts in multiple ways in MLflow: |Logged Value|Example code| Notes| |-|-|-| MLflow supports two ways of logging images: |Log a trivial file already existing | `mlflow.log_artifact("path/to/file.pkl")`| Files are always logged in the root of the run. If `artifact_path` is provided, then the file is logged in a folder as indicated in that parameter. | |Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated is not included. | +> [!TIP] +> When __logging large files__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_VALUE`. + ## Logging models MLflow introduces the concept of "models" as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be __registered__ and then __deployed__. 
On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio. Read the article [From artifacts to models in MLflow](concept-mlflow-models.md) for more information. To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For more details about how to log MLflow models, see [Logging MLflow models](how-to-log-mlflow-models.md). For migrating existing models to MLflow, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md). +> [!TIP] +> When __logging large models__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the model artifacts is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_VALUE`. + ## Automatic logging With Azure Machine Learning and MLflow, users can log metrics, model parameters and model artifacts automatically when training a model. Each framework decides what to track automatically for you. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported. [Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog). mlflow.autolog() ``` > [!TIP]-> You can control what gets automatically logged wit autolog. For instance, if you indicate `mlflow.autolog(log_models=False)`, MLflow will log everything but models for you. Such control is useful in cases where you want to log models manually but still enjoy automatic logging of metrics and parameters. Also notice that some frameworks may disable automatic logging of models if the trained model goes behond specific boundaries. Such behavior depends on the flavor used and we recommend you to view they documentation if this is your case. +> You can control what gets automatically logged with autolog. For instance, if you indicate `mlflow.autolog(log_models=False)`, MLflow will log everything but models for you. Such control is useful in cases where you want to log models manually but still enjoy automatic logging of metrics and parameters. Also notice that some frameworks may disable automatic logging of models if the trained model goes beyond specific boundaries. Such behavior depends on the flavor used and we recommend you view their documentation if this is your case. ## View jobs/runs information with MLflow |
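A compact sketch tying together the logging calls covered in this article follows. It assumes MLflow is already configured to track to the workspace (otherwise it falls back to a local `mlruns` folder); the parameter values, metric, and file names are illustrative.

```python
import mlflow
import matplotlib.pyplot as plt

with mlflow.start_run():
    # Parameters: arbitrary key-value inputs to the run.
    mlflow.log_params({"learning_rate": 0.01, "epochs": 5})

    # Metrics: always numeric, optionally logged per step.
    for step in range(5):
        mlflow.log_metric("loss", 1.0 / (step + 1), step=step)

    # Figures and images are stored as artifacts inside the run.
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9])
    mlflow.log_figure(fig, "figure.png")

    # Arbitrary files are logged as artifacts too.
    with open("summary.txt", "w") as handle:
        handle.write("training complete")
    mlflow.log_artifact("summary.txt")
```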
machine-learning | How To Share Models Pipelines Across Workspaces With Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md | ml_client_workspace = MLClient( credential=credential, workspace_name = "<workspace-name>") print(ml_client_workspace) -ml_client_registry = MLClient ( credential=credential, - registry_name = "<registry-name>") +ml_client_registry = MLClient(credential=credential, + registry_name="<REGISTRY_NAME>", + registry_location="<REGISTRY_REGION>") print(ml_client_registry) ``` |
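As a hedged follow-on to the two clients above, the sketch below registers a local model through `ml_client_registry` so other workspaces with access to the registry can consume it. The model name, version, local path, and type are placeholders, and it assumes the `azure-ai-ml` SDK v2.

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Placeholder asset definition -- adjust the name, version, and local path.
model = Model(
    name="sample-model",
    version="1",
    path="./model",                # local folder containing the model files
    type=AssetTypes.CUSTOM_MODEL,  # or AssetTypes.MLFLOW_MODEL, depending on how it was saved
    description="Model shared through the registry",
)

# Registering against the registry client (rather than the workspace client)
# makes the model visible to any workspace that has access to the registry.
registered_model = ml_client_registry.models.create_or_update(model)
print(registered_model.name, registered_model.version)
```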
machine-learning | How To Troubleshoot Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md | These pre-created environments also allow for faster deployment time. In user-managed environments, you're responsible for setting up your environment and installing every package that your training script needs on the compute target. Also be sure to include any dependencies needed for model deployment. -These types of environments have two subtypes. For the first type, BYOC (bring your own container), you bring an existing Docker image to AzureML. For the second type, Docker build context based environments, Azure Machine Learning materializes the image from the context that you provide. +These types of environments have two subtypes. For the first type, BYOC (bring your own container), you bring an existing Docker image to Azure Machine Learning. For the second type, Docker build context based environments, Azure Machine Learning materializes the image from the context that you provide. When you want conda to manage the Python environment for you, use a system-managed environment.-AzureML creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. By default, AzureML adds common features to the derived image. +Azure Machine Learning creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. By default, Azure Machine Learning adds common features to the derived image. Any Python packages present in the base image aren't available in the isolated conda environment. ### Create and manage environments There are some ways to decrease the impact of vulnerabilities: Reproducibility is one of the foundations of software development. When you're developing production code, a repeated operation must guarantee the same result. Mitigating vulnerabilities can disrupt reproducibility by changing dependencies. -AzureML's primary focus is to guarantee reproducibility. Environments fall under three categories: curated, +Azure Machine Learning's primary focus is to guarantee reproducibility. Environments fall under three categories: curated, user-managed, and system-managed. **Curated environments** are pre-created environments that Azure Machine Learning manages and are available by default in every Azure Machine Learning workspace provisioned. compute target and for model deployment. These types of environments have two su Once you install more dependencies on top of a Microsoft-provided image, or bring your own base image, vulnerability management becomes your responsibility. -You use **system-managed environments** when you want conda to manage the Python environment for you. AzureML creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. While Azure Machine Learning patches base images with each release, whether you use the +You use **system-managed environments** when you want conda to manage the Python environment for you. Azure Machine Learning creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. While Azure Machine Learning patches base images with each release, whether you use the latest image may be a tradeoff between reproducibility and vulnerability management. 
So, it's your responsibility to choose the environment version used for your jobs or model deployments while using system-managed environments. To create a new environment, you must use one of the following approaches: * The directory should contain a Dockerfile and any other files needed to build the image * [Sample here](https://aka.ms/azureml/environment/create-env-build-context-v2) * Conda specification - * You must specify a base Docker image for the environment; AzureML builds the conda environment on top of the Docker image provided + * You must specify a base Docker image for the environment; Azure Machine Learning builds the conda environment on top of the Docker image provided * Provide the relative path to the conda file * [Sample here](https://aka.ms/azureml/environment/create-env-conda-spec-v2) az ml connection create --file connection.yml --resource-group my-resource-group **Resources** * [Python SDK v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)-* [Python SDK v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection) +* [Python SDK v2 workspace connections](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb) * [Azure CLI workspace connections](/cli/azure/ml/connection) ### Multiple credentials for base image registry myEnv.docker.base_image_registry.registry_identity = None **Resources** * [Delete a workspace connection v1](https://aka.ms/azureml/environment/delete-connection-v1) * [Python SDK v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)-* [Python SDK v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection) +* [Python SDK v2 workspace connections](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb) * [Azure CLI workspace connections](/cli/azure/ml/connection) ### Secrets in base image registry az ml connection create --file connection.yml --resource-group my-resource-group **Resources** * [Python SDK v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)-* [Python SDK v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection) +* [Python SDK v2 workspace connections](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb) * [Azure CLI workspace connections](/cli/azure/ml/connection) ### Deprecated Docker attribute Ensure that you include a path for your build context ### Missing Dockerfile path <!--issueDescription--> -This issue can happen when AzureML fails to find your Dockerfile. As a default, Azure Machine Learning looks for a Dockerfile named 'Dockerfile' at the root of your build context directory unless you specify a Dockerfile path. +This issue can happen when Azure Machine Learning fails to find your Dockerfile. As a default, Azure Machine Learning looks for a Dockerfile named 'Dockerfile' at the root of your build context directory unless you specify a Dockerfile path. 
**Potential causes:** * Your Dockerfile isn't at the root of your build context directory and/or is named something other than 'Dockerfile,' and you didn't provide its path env.python.conda_dependencies = conda_dep *Applies to: Azure CLI & Python SDK v2* -You must specify a base Docker image for the environment, and AzureML then builds the conda environment on top of that image +You must specify a base Docker image for the environment, and Azure Machine Learning then builds the conda environment on top of that image * Provide the relative path to the conda file * See how to [create an environment from a conda specification](https://aka.ms/azureml/environment/create-env-conda-spec-v2) env.python.conda_dependencies = conda_dep *Applies to: Azure CLI & Python SDK v2* -You must specify a base Docker image for the environment, and AzureML then builds the conda environment on top of that image +You must specify a base Docker image for the environment, and Azure Machine Learning then builds the conda environment on top of that image * Provide the relative path to the conda file * See how to [create an environment from a conda specification](https://aka.ms/azureml/environment/create-env-conda-spec-v2) This issue can happen when there's a failure in accessing a workspace's associat **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. * Pipeline job failures. * Model deployment failures. <!--/issueDescription--> This issue can happen when a Docker image pull fails during an image build. **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** If the image you're trying to reference doesn't exist in the container registry * Check that you've used the correct tag and that you've set `user_managed_dependencies` to `True`. Setting [user_managed_dependencies](https://aka.ms/azureml/environment/environment-python-section) to `True` disables conda and uses the user's installed packages If you haven't provided credentials for a private registry you're trying to pull from, or the provided credentials are incorrect-* Set [workspace connections](https://aka.ms/azureml/environment/set-connection-v1) for the container registry if needed +* Set [workspace connections](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb) for the container registry if needed ++**Resources** +* [Workspace connections v1](https://aka.ms/azureml/environment/set-connection-v1) ### I/O Error <!--issueDescription--> This issue can happen when a Docker image pull fails due to a network issue. **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. 
<!--/issueDescription--> **Troubleshooting steps** This issue can happen when a package listed in your conda specification is inval **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when there's a failure in communicating with the entity fr **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when there's a failure building a package required for the **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when a command isn't recognized during an image build. **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when conda package resolution takes too long to complete. **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when conda package resolution fails due to available memor **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when one or more conda packages listed in your specificati **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when a Python module listed in your conda specification do **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. 
<!--/issueDescription--> **Troubleshooting steps** This issue can happen when there's no package found that matches the version you **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when building wheels for mpi4py fails. **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** Ensure that you have a working MPI installation (preference for MPI-3 support an * If needed, follow these [steps on building MPI](https://mpi4py.readthedocs.io/en/stable/appendix.html#building-mpi-from-sources) Ensure that you're using a compatible python version-* AzureML requires Python 2.5 or 3.5+, but Python 3.7+ is recommended +* Azure Machine Learning requires Python 2.5 or 3.5+, but Python 3.7+ is recommended * See [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py) **Resources** because you can't provide interactive authentication during a build **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** az ml connection create --file connection.yml --resource-group my-resource-group **Resources** * [Python SDK v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)-* [Python SDK v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection) +* [Python SDK v2 workspace connections](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/connections/connections.ipynb) * [Azure CLI workspace connections](/cli/azure/ml/connection) ### Forbidden blob This issue can happen when an attempt to access a blob in a storage account is r **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when the conda environment fails to be created or updated **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when the conda command isn't recognized during conda envir **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. 
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when there's a package specified in your conda environment **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when you've specified a package on the command line using **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when there's a failure decoding a character in your conda **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> ## *Pip issues during build* This issue can happen when your image build fails during Python package installa **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when pip fails to uninstall a Python package that the oper **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** This issue can happen when you haven't specified any targets and no makefile is **Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. **Troubleshooting steps** * Ensure that you've spelled the makefile correctly This issue can happen when there's a failure in pushing a Docker image to a cont **Affected areas (symptoms):** * Failure in building environments from the UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. +* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** If you aren't using a virtual network, or if you've configured it correctly, tes **Affected areas (symptoms):** * A successful build, but no available logs. * Failure in building environments from UI, SDK, and CLI.-* Failure in running jobs because AzureML implicitly builds the environment in the first step. 
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step. <!--/issueDescription--> **Troubleshooting steps** |
machine-learning | Reference Yaml Component Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | `display_name` | string | Display name of the component in the studio UI. Can be non-unique within the workspace. | | | | `description` | string | Description of the component. | | | | `tags` | object | Dictionary of tags for the component. | | |+| `is_deterministic` | boolean |This option determines if the component will produce the same output for the same input data. You should usually set this to `false` for components that load data from external sources, such as importing data from a URL. This is because the data at the URL might change over time. | | `true` | | `command` | string | **Required.** The command to execute. | | | | `code` | string | Local path to the source code directory to be uploaded and used for the component. | | | | `environment` | string or object | **Required.** The environment to use for the component. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | | |
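To show `is_deterministic` in context, here's a hedged sketch that writes a minimal component specification with `is_deterministic: false` and loads it with the SDK v2 `load_component` helper. The component name, command, input, and environment reference are placeholders rather than values from the article.

```python
from pathlib import Path
from azure.ai.ml import load_component

# Hypothetical component spec marking a data-import step as non-deterministic,
# since the content behind the URL can change between runs. The name, command,
# and environment reference below are placeholders.
component_yaml = """\
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
type: command
name: fetch_external_data
display_name: Fetch external data
version: 1
is_deterministic: false
command: echo "fetching data from ${{inputs.source_url}}"
environment: azureml:my-environment:1
inputs:
  source_url:
    type: string
"""

Path("fetch_external_data.yml").write_text(component_yaml)

# Load the specification into a component object that can be used in pipelines.
component = load_component("fetch_external_data.yml")
print(component.name, component.version)
```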
migrate | Migrate Support Matrix Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md | Support | Details **SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role. **SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.-**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported. +**Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported. **Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported. > [!NOTE] |
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md | Support | Details **SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role. **SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.-**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported. +**Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported. **Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported. > [!NOTE] |
mysql | Concepts Query Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md | This view returns all the data in Query Store. There is one row for each distinc | `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period| | `warning_count` | bigint(20)| NO| Number of warnings this query generated during the interval| | `error_count` | bigint(20)| NO| Number of errors this query generated during the interval|-| `sum_timer_wait` | double| YES| Total execution time of this query during the interval| -| `avg_timer_wait` | double| YES| Average execution time for this query during the interval| -| `min_timer_wait` | double| YES| Minimum execution time for this query| -| `max_timer_wait` | double| YES| Maximum execution time| +| `sum_timer_wait` | double| YES| Total execution time of this query during the interval in milliseconds| +| `avg_timer_wait` | double| YES| Average execution time for this query during the interval in milliseconds| +| `min_timer_wait` | double| YES| Minimum execution time for this query in milliseconds| +| `max_timer_wait` | double| YES| Maximum execution time in milliseconds| | `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window| | `sum_rows_affected` | bigint(20)| NO| Number of rows affected| | `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client| |
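To see the millisecond timer columns in practice, the hedged sketch below pulls the slowest statements from the Query Store view. It assumes the `mysql-connector-python` package and that the view is exposed as `mysql.query_store` on the server, as this article describes; the connection details are placeholders.

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

# Placeholder connection details for an Azure Database for MySQL single server.
connection = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>@<server-name>",
    password="<password>",
    database="mysql",
)

# Timer columns are reported in milliseconds, so divide by 1000 for seconds.
query = """
    SELECT execution_count,
           avg_timer_wait / 1000 AS avg_seconds,
           max_timer_wait / 1000 AS max_seconds,
           sum_rows_sent
    FROM mysql.query_store
    ORDER BY avg_timer_wait DESC
    LIMIT 10
"""

cursor = connection.cursor()
cursor.execute(query)
for execution_count, avg_seconds, max_seconds, rows_sent in cursor.fetchall():
    print(f"{execution_count:>8} runs  avg {avg_seconds:.3f}s  max {max_seconds:.3f}s  rows sent {rows_sent}")
cursor.close()
connection.close()
```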
openshift | Howto Add Update Pull Secret | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md | oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson ### Verify that pull secret is in place ```-oc exec $(oc get pod -n openshift-apiserver -o jsonpath="{.items[0].metadata.name}") -- cat /var/lib/kubelet/config.json +oc exec -n openshift-apiserver $(oc get pod -n openshift-apiserver -o jsonpath="{.items[0].metadata.name}") -- cat /var/lib/kubelet/config.json ``` After the secret is set, you're ready to enable Red Hat Certified Operators. |
openshift | Tutorial Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md | az aro create \ After executing the `az aro create` command, it normally takes about 35 minutes to create a cluster. +#### Selecting a different ARO version ++You can select a specific version of ARO to use when creating your cluster. First, use the CLI to query for available ARO versions: ++`az aro get-versions --location <region>` ++Once you've chosen the version, specify it using the `--version` parameter in the `aro create` command: ++```azurecli-interactive +az aro create \ + --resource-group $RESOURCEGROUP \ + --name $CLUSTER \ + --vnet aro-vnet \ + --master-subnet master-subnet \ + --worker-subnet worker-subnet \ + --version <x.y.z> +``` + ## Next steps In this part of the tutorial, you learned how to: |
operator-nexus | Concepts Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md | Title: "Azure Operator Nexus: observability using Azure Monitor" -description: Operator Nexus uses Azure Monitor and collects and aggregates data in Azure Log Analytics workspace. The analysis, visualization, and alerting is performed on this collected data. +description: Operator Nexus uses Azure Monitor and collects and aggregates data in Azure Log Analytics Workspace (LAW). The analysis, visualization, and alerting is performed on this collected data. Previously updated : 01/31/2023 #Required; mm/dd/yyyy format. Last updated : 03/06/2023 #Required; mm/dd/yyyy format. The key highlights of Operator Nexus observability framework are: This article helps you understand Operator Nexus observability framework that consists of a stack of components: - Azure Monitor collects and aggregates logging data from the Operator Nexus components-- Azure Log Analytics workspace collects and aggregates logging data from multiple Azure subscriptions and tenants+- Azure Log Analytics Workspace (LAW) collects and aggregates logging data from multiple Azure subscriptions and tenants - Analysis, visualization, and alerting are performed on the aggregated log data. ## Platform Monitoring These logs and metrics are used to observe the state of the platform. You can se ### Monitoring Data Operator Nexus observability allows you to collect the same kind of data as other Azure-resources. The data collected from each of your instances can be viewed in your LAW (Log Analytics workspace). +resources. The data collected from each of your instances can be viewed in your LAW. You can learn about monitoring Azure resources [here](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data). The set of infrastructure components includes: * Undercloud Control Plane (Kubernetes cluster responsible for deployment and managing lifecycle of overall Platform). Collection of log data from these layers is enabled by default during the creation of your Operator Nexus-instance. These collected logs are routed to your Azure Monitor Log -Analytics Workspace. +instance. These collected logs are routed to your Azure Monitor LAW. You can also collect data from the tenant layers created for running Containerized and Virtualized Network Functions. The log data that can be collected includes: created for running Containerized and Virtualized Network Functions. The log dat * Collection of logs from AKS-Hybrid clusters and the applications deployed on top. You'll need to enable the collection of the logs from the tenant AKS-Hybrid clusters and Virtual Machines.-You should follow the steps to deploy the [Azure monitoring agents](/azure/azure-monitor/agents/agents-overview#install-the-agent-and-configure-data-collection). The data would be collected in your Azure Log -Analytics Workspace. +You should follow the steps to deploy the [Azure monitoring agents](/azure/azure-monitor/agents/agents-overview#install-the-agent-and-configure-data-collection). The data would be collected in your Azure LAW. 
### Operator Nexus Logs storage See **[Getting Started with Azure Metrics Explorer](/azure/azure-monitor/essenti #### Workbooks Workbooks combine text, log queries, metrics, and parameters for data analysis and the creation of multiple kinds of rich visualizations.-You can use the sample Azure Resource Manager workbook templates for [Operator Nexus Logging and Monitoring](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services) to deploy Azure Workbooks within your Azure Log Analytics Workspace. +You can use the sample Azure Resource Manager workbook templates for [Operator Nexus Logging and Monitoring](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services) to deploy Azure Workbooks within your Azure LAW. #### Alerts You can use the sample Azure Resource Manager alarm templates for [Operator Nexus alerting rules](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services#alert-rules). You should specify thresholds and conditions for the alerts. You can then deploy these alert templates on your on-premises environment. -## Log analytic workspace +## Log Analytic Workspace -A [Log Analytics workspace (LAW)](/azure/azure-monitor/logs/log-analytics-workspace-overview) +A [LAW](/azure/azure-monitor/logs/log-analytics-workspace-overview) is a unique environment to log data from Azure Monitor and other Azure services. Each workspace has its own data repository and configuration but may combine data from multiple services. Each workspace consists of multiple data tables. -A single Log Analytics workspace can be created to collect all relevant data or multiple workspaces based on operator requirements. +A single LAW can be created to collect all relevant data or multiple workspaces based on operator requirements. |
operator-nexus | Concepts Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md | Figure: Resource model The Operator Nexus Cluster (or Instance) platform components include the infrastructure and the platform components used to manage these infrastructure resources. -### Network Fabric controller +### Network Fabric Controller -The Network fabric Controller (NFC) is a resource that automates the life cycle management of all network devices deployed in an Operator Nexus instance. +The Network Fabric Controller (NFC) is a resource that automates the life cycle management of all network devices deployed in an Operator Nexus instance. NFC is hosted in a [Microsoft Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) in an Azure region. The region should be connected to your on-premises network via [Microsoft Azure ExpressRoute](/azure/expressroute/expressroute-introduction).-An NFC can manage the network fabric of many (subject to limits) Operator Nexus instances. +An NFC can manage the Network Fabric of many (subject to limits) Operator Nexus instances. -### Network fabric +### Network Fabric -The Network fabric resource models a collection of network devices, compute servers, and storage appliances, and their interconnections. The network fabric resource also includes the networking required for your Network Functions and workloads. Each Operator Nexus instance has one Network fabric. +The Network Fabric resource models a collection of network devices, compute servers, and storage appliances, and their interconnections. The Network Fabric resource also includes the networking required for your network functions and workloads. Each Operator Nexus instance has one Network Fabric. -The Network fabric Controller (NFC) performs the lifecycle management of the network fabric. -It configures and bootstraps the network fabric resources. +The Network Fabric Controller (NFC) performs the lifecycle management of the Network Fabric. +It configures and bootstraps the Network Fabric resources. ### Cluster manager The CM and the NFC are hosted in the same Azure subscription. ### Cluster The Cluster (or Compute Cluster) resource models a collection of racks, bare metal machines, storage, and networking.-Each cluster is mapped to the on-premises Network fabric. A cluster provides a holistic view of the deployed compute capacity. +Each cluster is mapped to the on-premises Network Fabric. A cluster provides a holistic view of the deployed compute capacity. Cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. A cluster is also the basic unit for compute and storage upgrades. -### Network rack +### Network Rack -The Network rack consists of Consumer Edge (CE) routers, Top of Rack switches (ToRs), storage appliance, Network Packet Broker (NPB), and the Terminal Server. -The rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other racks. +The Network Rack consists of Consumer Edge (CE) routers, Top of Rack switches (ToRs), storage appliance, Network Packet Broker (NPB), and the Terminal Server (TS). +The Rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other Racks. ### Rack Workload components are resources that you use in hosting your workloads. The Network resources represent the virtual networking in support of your workloads hosted on VMs or AKS-Hybrid clusters. 
There are five Network resource types that represent a network attachment to an underlying isolation-domain. -- **Cloud Services Network Resource**: provides VMs/AKS-Hybrid clusters access to cloud services such as DNS, NTP, and user-specified Azure PaaS services. You must create at least one Cloud Services Network in each of your Operator Nexus instances. Each Cloud Service Network can be reused by many VMs and/or AKS-Hybrid clusters.+- **Cloud Services Network Resource**: provides VMs/AKS-Hybrid clusters access to cloud services such as DNS, NTP, and user-specified Azure PaaS services. You must create at least one Cloud Services Network (CSN) in each of your Operator Nexus instances. Each CSN can be reused by many VMs and/or AKS-Hybrid clusters. - **Default CNI Network Resource**: supports configuring of the AKS-Hybrid cluster network resources. |
operator-nexus | Howto Baremetal Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md | +- Power off the BMM - Start the BMM - Make the BMM unschedulable or schedulable - Reinstall the BMM image This article describes how to perform lifecycle management operations on Bare Me 1. Install the latest version of the [appropriate CLI extensions](./howto-install-cli-extensions.md) 1. Ensure that the target bare metal machine (server) must have its `poweredState` set to `On` and have its `readyState` set to `True`-1. Get the Resource group name that you created for `network cloud cluster resource` +1. Get the Resource group name that you created for `Cluster` resource -## Power-off bare metal machines +## Power off the BMM This command will `power-off` the specified `bareMetalMachineName`. This command will `power-off` the specified `bareMetalMachineName`. --resource-group "resourceGroupName" ``` -## Start bare metal machine +## Start the BMM This command will `start` the specified `bareMetalMachineName`. |
postgresql | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md | Below are some planned maintenance scenarios. These events typically incur up to | **Scenario** | **Process**| | - | -- | -| <b>Compute scaling (User-initiated)| During compute scaling operation, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, storage is detached, and then it is shut down. A new flexible server with the same database server name is provisioned with the scaled compute configuration. The storage is then attached to the new server and the database is started which performs recovery if necessary before accepting client connections. | -| <b>Scaling up storage (User-initiated) | When a scaling up storage operation is initiated, active checkpoints are allowed to complete, client connections are drained, and any uncommitted transactions are canceled. After that the server is shut down. The storage is scaled to the desired size and then attached to the new server. A recovery is performed if needed before accepting client connections. Note that scaling down of the storage size is not supported. | +| <b>Compute scaling (User-initiated)| During compute scaling operation, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, storage is detached, and then it's shut down. A new flexible server with the same database server name is provisioned with the scaled compute configuration. The storage is then attached to the new server and the database is started which performs recovery if necessary before accepting client connections. | +| <b>Scaling up storage (User-initiated) | When a scaling up storage operation is initiated, active checkpoints are allowed to complete, client connections are drained, and any uncommitted transactions are canceled. After that the server is shut down. The storage is scaled to the desired size and then attached to the new server. A recovery is performed if needed before accepting client connections. Note that scaling down of the storage size isn't supported. | | <b>New software deployment (Azure-initiated) | Rollouts of new features or bug fixes automatically happen as part of the service's planned maintenance, and you can schedule when those activities happen. For more information, check your [portal](https://aka.ms/servicehealthpm). | | <b>Minor version upgrades (Azure-initiated) | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. The database server is automatically restarted with the new minor version. For more information, see [documentation](../concepts-monitoring.md#planned-maintenance-notification). You can also check your [portal](https://aka.ms/servicehealthpm).| When the flexible server is configured with **high availability**, the flexible ## Unplanned downtime mitigation -Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned.
While an unplanned downtime cannot be avoided, flexible server helps mitigating the downtime by automatically performing recovery operations without requiring human intervention. +Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention. Though we continuously strive to provide high availability, there are times when the Azure Database for PostgreSQL - Flexible Server service incurs an outage, causing unavailability of the databases and impacting your application. When our service monitoring detects issues that cause widespread connectivity errors, failures, or performance issues, the service automatically declares an outage to keep you informed. ### Service Outage -In the event of the Azure Database for PostgreSQL - Flexible Server service outage, you will be able to see additional details related to the outage in the following places. +In the event of an Azure Database for PostgreSQL - Flexible Server service outage, you'll be able to see additional details related to the outage in the following places. * **Azure Portal Banner** If your subscription is identified as impacted, there will be an outage alert of a Service Issue in your Azure portal **Notifications**. When you create a support ticket from **Help + support** or **Support + troubleshooting**. * **Service Health** The **Service Health** page in the Azure portal contains information about Azure data center status globally. Search for "service health" in the search bar in the Azure portal, then view Service issues in the Active events category. You can also view the health of individual resources in the **Resource health** page of any resource under the Help menu. A sample screenshot of the Service Health page follows, with information about an active service issue in Southeast Asia. :::image type="content" source="./media/business-continuity/service-health-service-issues-example-map.png" alt-text=" Screenshot showing service outage in Service Health portal.":::-F### Unplanned downtime: failure scenarios and service recovery +### Unplanned downtime: failure scenarios and service recovery Below are some unplanned failure scenarios and the recovery process. | **Scenario** | **Recovery process** <br> [Servers configured without zone-redundant HA] | **Recovery process** <br> [Servers configured with Zone-redundant HA] | | - || - | | <B>Database server failure | If the database server is down, Azure will attempt to restart the database server. If that fails, the database server will be restarted on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault, such as a large transaction, and the volume of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions.
| If the database server failure is detected, the server is failed over to the standby server, thus reducing downtime. For more information, see [HA concepts page](./concepts-high-availability.md). RTO is expected to be 60-120s, with zero data loss. |-| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors such as the entire storage is inaccessible, the flexible server is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). | -| <b> Logical/user errors | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors are not protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. | +| <B>Storage failure | Applications don't see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors such as the entire storage is inaccessible, the flexible server is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). | +| <b> Logical/user errors | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors aren't protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. | | <b> Availability zone failure | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. 
A new flexible server will be deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover (a point-in-time restore sketch using the Azure CLI follows this entry). | Flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). | | <b> Region failure | If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to be a standalone read-writable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. | Same process. | |
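As a companion to the point-in-time restore guidance in the table above, here's a minimal Azure CLI sketch for restoring a flexible server to a custom restore point; the server names, resource group, and timestamp are placeholders.

```azurecli
az postgres flexible-server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-time "2023-03-01T13:10:00+00:00"
```

After the restore completes, you can export specific tables from the restored server with pg_dump and load them into the original server with pg_restore, as described in the table.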
postgresql | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md | All incoming connections that use earlier versions of the TLS protocol, such as [Certificate authentication](https://www.postgresql.org/docs/current/auth-cert.html) is performed using **SSL client certificates** for authentication. In this scenario, PostgreSQL server compares the CN (common name) attribute of the client certificate presented, against the requested database user. **Azure Database for PostgreSQL - Flexible Server does not support SSL certificate based authentication at this time.** -To determine your current SSL connection status you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns t if the connection is using SSL, otherwise it returns f. --+To determine your current SSL connection status you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns t if the connection is using SSL, otherwise it returns f. You can also collect all the information about your Azure Database for PostgreSQL - Flexible Server instance's SSL usage by process, client, and application by using the following query: ++```sql +SELECT datname as "Database name", usename as "User name", ssl, client_addr, application_name, backend_type + FROM pg_stat_ssl + JOIN pg_stat_activity + ON pg_stat_ssl.pid = pg_stat_activity.pid + ORDER BY ssl; +``` ## Next steps * Learn how to create a flexible server by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md). |
postgresql | Quickstart Create Server Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md | Title: 'Quickstart: Create an Azure DB for PostgreSQL Flexible Server - Bicep' + Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - Bicep' description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Bicep. |
postgresql | Concepts Ssl Connection Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md | You can enable or disable the **ssl-enforcement** parameter using `Enabled` or `Disabled`. ```azurecli az postgres server update --resource-group myresourcegroup --name mydemoserver --ssl-enforcement Enabled ```+### Determining SSL connection status ++You can also collect all the information about your Azure Database for PostgreSQL - Single Server instance's SSL usage by process, client, and application by using the following query: +```sql +SELECT datname as "Database name", usename as "User name", ssl, client_addr, application_name, backend_type + FROM pg_stat_ssl + JOIN pg_stat_activity + ON pg_stat_ssl.pid = pg_stat_activity.pid + ORDER BY ssl; +``` ## Ensure your application or framework supports TLS connections |
purview | How To Certify Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-certify-assets.md | To certify an asset, you must be a **data curator** for the collection containing the asset. :::image type="content" source="media/how-to-certify-assets/toggle-certification-on.png" alt-text="Toggle an asset to be certified" border="true"::: -1. Save your changes. The asset will now have a "Certified" label next to the asset name. +2. Save your changes. The asset has a "Certified" label next to the asset name. :::image type="content" source="media/how-to-certify-assets/view-certified-asset.png" alt-text="An asset with a certified label" border="true"::: You can use the Microsoft Purview [bulk edit experience](how-to-bulk-edit-assets.md) 1. Select **Apply** -All assets selected will have the "Certified" label. +All selected assets have the "Certified" label. ## Viewing certification labels in Search -When search or browsing the data catalog, you'll see a certification label on any asset that is certified. Certified assets will also be boosted in search results to help data consumers discover them easily. +When searching or browsing the data catalog, you see a certification label on any asset that is certified. Certified assets are boosted in search results to help data consumers discover them easily. :::image type="content" source="media/how-to-certify-assets/search-certified-assets.png" alt-text="Search results with certified assets" border="true"::: ## Next steps -Discover your assets in the Microsoft Purview data catalog by either: +Discover your assets in the Microsoft Purview Data Catalog by either: - [Browsing the data catalog](how-to-browse-catalog.md) - [Searching the data catalog](how-to-search-catalog.md) |
purview | How To Request Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md | |
purview | How To Workflow Business Terms Approval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md | |
purview | How To Workflow Manage Requests Approvals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md | |
purview | How To Workflow Self Service Data Access Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md | |
purview | Register Scan Azure Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md | Microsoft Purview supports lineage from Azure SQL Database. When you're setting :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs.png" alt-text="Screenshot that shows the screen that runs lineage extraction every six hours." lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs-expanded.png"::: + > [!Note] + > Turning on **Lineage extraction** triggers a daily scan. + ### Search Azure SQL Database assets and view runtime lineage You can [browse through the data catalog](how-to-browse-catalog.md) or [search the data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. The following steps describe how to view runtime lineage details: |
purview | Register Scan On Premises Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md | To create and run a new scan, do the following: :::image type="content" source="media/register-scan-on-premises-sql-server/on-premises-sql-set-up-scan-win-auth.png" alt-text="Set up scan"::: -1. You can scope your scan to specific tables by choosing the appropriate items in the list. +1. You can scope your scan to specific tables by choosing the appropriate items in the list after you enter the database name. :::image type="content" source="media/register-scan-on-premises-sql-server/on-premises-sql-scope-your-scan.png" alt-text="Scope your scan"::: |
purview | Sensitivity Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md | |
reliability | Migrate App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service.md | An App Service lives in an App Service plan (ASP), and the App Service plan exis - For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone that is selected by the platform in the selected region. -- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is a multiple of three (3 * N), the instances will be spread evenly. However, if the number of instances is not a multiple of three, the remainder of the instances will get spread across the remaining one or two zones.+- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is a multiple of three (3 * N), the instances will be spread evenly. However, if the number of instances is not a multiple of three, the remainder of the instances will get spread across the remaining one or two zones. > [!NOTE] > [App Service SLA](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1) is calculated based on the Maximum Available Minutes and Downtime which is standard irrespective of availability zone enablement. Downtime for the App Service is defined as the following. If you want your App Service to use availability zones, redeploy your apps into Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section. -Applications that are deployed in an App Service plan that has availability zones enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications. +Applications that are deployed in an App Service plan that has availability zones enabled will continue to run and serve traffic if a single zone becomes unavailable. 
However, it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted by an outage in other Availability Zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications. When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan. A sketch of creating a zone-redundant plan with the Azure CLI follows this entry. |
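To complement the zone-redundancy guidance above, here's a minimal Azure CLI sketch of creating a zone-redundant App Service plan; the plan name, resource group, and region are placeholders, and the plan must use a SKU that supports availability zones (such as Premium v3).

```azurecli
az appservice plan create \
  --name MyZoneRedundantPlan \
  --resource-group MyResourceGroup \
  --location eastus \
  --sku P1v3 \
  --zone-redundant \
  --number-of-workers 3
```

Keeping the instance count at a multiple of three aligns with the even-spread behavior described above.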
resource-mover | Support Matrix Extension Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-extension-resource-types.md | -This article summarizes all the [Extension resource types](/articles/azure-resource-manager/management/extension-resource-types.md) that are currently supported while moving Azure resources across regions using Azure resource mover. +This article summarizes all the [Extension resource types](/articles/azure-resource-manager/management/extension-resource-types.md) that are currently supported while moving Azure resources across regions using Azure Resource Mover. + ## Extension resource types supported |
role-based-access-control | Classic Administrators | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md | This article describes how to add or change the Co-Administrator and Service Administrator roles. ## Add a guest user as a Co-Administrator +> [!NOTE] +> Removing a guest user from your Azure Active Directory does not remove the guest user's classic Co-Administrator access to a subscription. You must follow the steps in the [Remove a Co-Administrator](#remove-a-co-administrator) section to remove that access. + To add a guest user as a Co-Administrator, follow the same steps as in the previous [Add a Co-Administrator](#add-a-co-administrator) section. The guest user must meet the following criteria: - The guest user must have a presence in your directory. This means that the user was invited to your directory and accepted the invite. |
site-recovery | Encryption Feature Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/encryption-feature-deprecation.md | |
site-recovery | Hyper V Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-troubleshoot.md | |
site-recovery | Hyper V Azure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md | Title: Set up Hyper-V disaster recovery by using Azure Site Recovery description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery. Previously updated : 01/16/2023- Last updated : 03/02/2023+ + # Set up disaster recovery of on-premises Hyper-V VMs to Azure |
site-recovery | Hyper V Prepare On Premises Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-prepare-on-premises-tutorial.md | Title: Prepare on-premises Hyper-V servers for disaster recovery by using Azure description: Learn how to prepare on-premises Hyper-V VMs for disaster recovery to Azure by using Azure Site Recovery. Previously updated : 11/12/2019- Last updated : 03/02/2023+ + # Prepare on-premises Hyper-V servers for disaster recovery to Azure |
site-recovery | Upgrade 2012R2 To 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-2012R2-to-2016.md | |
static-web-apps | Languages Runtimes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/languages-runtimes.md | The following versions are supported for managed functions in Static Web Apps. I [!INCLUDE [Languages and runtimes](../../includes/static-web-apps-languages-runtimes.md)] +## Re-enabling proxies in v4.x ++Azure Functions supports [re-enabling proxies in v4.x](../azure-functions/legacy-proxies.md#re-enable-proxies-in-functions-v4x). To enable proxy support in managed functions for your static web app, set `SWA_ENABLE_PROXIES_MANAGED_FUNCTIONS` to `true` in your application settings (a CLI sketch follows this entry). ++> [!NOTE] While proxies are supported in v4.x, consider using Azure API Management integration with your managed function apps, so your app isn't reliant on proxies. + ## Deprecations > [!NOTE] |
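Here's a minimal sketch of adding that application setting with the Azure CLI; the static web app name is a placeholder.

```azurecli
az staticwebapp appsettings set \
  --name MyStaticWebApp \
  --setting-names SWA_ENABLE_PROXIES_MANAGED_FUNCTIONS=true
```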
storage | Blob Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md | The following list describes features and capabilities that are available in the - **Inventory reports for blobs and containers** - You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, content length, blob versions and their associated properties such as creation time, last modified time. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status. + You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, content length, blob versions and their associated properties such as creation time, last modified time. Empty containers aren't listed in the Blob Inventory report. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status. (A CLI sketch for enabling blob inventory follows this entry.) - **Custom Schema** |
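As a companion to the inventory report description above, here's a hedged Azure CLI sketch for attaching an inventory policy to a storage account; the account and resource group names are placeholders, and `inventory-policy.json` is assumed to contain rules that follow the blob inventory policy schema.

```azurecli
az storage account blob-inventory-policy create \
  --account-name mystorageaccount \
  --resource-group myresourcegroup \
  --policy @inventory-policy.json
```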
synapse-analytics | Apache Spark Cdm Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md | Title: Azure Synapse Spark Common Data Model (CDM) connector -description: Learn how to use the Azure Synapse Spark CDM connector to read and write CDM entities in a CDM folder on ADLS. + Title: Spark Common Data Model connector for Azure Synapse Analytics +description: Learn how to use the Spark CDM connector in Azure Synapse Analytics to read and write Common Data Model entities in a Common Data Model folder on Azure Data Lake Storage. Last updated 02/03/2023 -# Common Data Model (CDM) Connector for Azure Synapse Spark +# Spark Common Data Model connector for Azure Synapse Analytics -The Synapse Spark Common Data Model (CDM) format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. +The Spark Common Data Model connector (Spark CDM connector) is a format reader/writer in Azure Synapse Analytics. It enables a Spark program to read and write Common Data Model entities in a Common Data Model folder via Spark DataFrames. -For information on defining CDM documents using CDM 1.2 see. [What is CDM and how to use it](/common-data-model/). +For information on defining Common Data Model documents by using Common Data Model 1.2, see [this article about what Common Data Model is and how to use it](/common-data-model/). -## High level functionality +## Capabilities -The following capabilities are supported: +At a high level, the connector supports: -* Reading data from an entity in a CDM folder into a Spark dataframe -* Writing from a Spark dataframe to an entity in a CDM folder based on a CDM entity definition -* Writing from a Spark dataframe to an entity in a CDM folder based on the dataframe schema +* Spark 2.4, 3.1, and 3.2. +* Reading data from an entity in a Common Data Model folder into a Spark DataFrame. +* Writing from a Spark DataFrame to an entity in a Common Data Model folder based on a Common Data Model entity definition. +* Writing from a Spark DataFrame to an entity in a Common Data Model folder based on the DataFrame schema. -## Capabilities +The connector also supports: -* Supports reading and writing to CDM folders in ADLS gen2 with HNS enabled. -* Supports reading from CDM folders described by either manifest or model.json files. -* Supports writing to CDM folders described by a manifest file. -* Supports data in CSV format with/without column headers and with user selectable delimiter character. -* Supports data in Apache Parquet format, including nested Parquet. -* Supports submanifests on read, optional use of entity-scoped submanifests on write. -* Supports writing data using user modifiable partition patterns. -* Supports use of managed identity Synapse and credentials. -* Supports resolving CDM aliases locations used in imports using CDM adapter definitions described in a config.json. -* Parallel writes are not supported. It is not recommended. There is no locking mechanism at the storage layer. +* Reading and writing to Common Data Model folders in Azure Data Lake Storage with a hierarchical namespace (HNS) enabled. +* Reading from Common Data Model folders described by either manifest or *model.json* files. +* Writing to Common Data Model folders described by a manifest file. +* Data in CSV format with or without column headers, and with user-selectable delimiter characters. +* Data in Apache Parquet format, including nested Parquet. 
+* Submanifests on read, and optional use of entity-scoped submanifests on write. +* Writing data via user-modifiable partition patterns. +* Use of managed identities and credentials in Azure Synapse Analytics. +* Resolving Common Data Model alias locations used in imports via Common Data Model adapter definitions described in a *config.json* file. ## Limitations -The following scenarios aren't supported: --* Programmatic access to entity metadata after reading an entity. -* Programmatic access to set or override metadata when writing an entity. -* Schema drift - where data in a dataframe being written includes extra attributes not included in the entity definition. -* Schema evolution - where entity partitions reference different versions of the entity definition -* Write support for model.json isn't supported. -* Executing ```com.microsoft.cdm.BuildInfo.version``` will verify the version +The connector doesn't support the following capabilities and scenarios: -Spark 2.4, 3.1, and 3.2 are supported. +* Parallel writes. We don't recommend them. There's no locking mechanism at the storage layer. +* Programmatic access to entity metadata after you read an entity. +* Programmatic access to set or override metadata when you're writing an entity. +* Schema drift, where data in a DataFrame that's being written includes extra attributes not included in the entity definition. +* Schema evolution, where entity partitions reference different versions of the entity definition. You can verify the version by running `com.microsoft.cdm.BuildInfo.version`. +* Write support for *model.json*. +* Writing `Time` data to Parquet. Currently, the connector supports overriding a time stamp column to be interpreted as a Common Data Model `Time` value rather than a `DateTime` value for CSV files only. +* The Parquet `Map` type, arrays of primitive types, and arrays of array types. Common Data Model doesn't currently support them, so neither does the Spark CDM connector. ## Samples-Checkout the [sample code and CDM files](https://github.com/Azure/spark-cdm-connector/tree/spark3.2/samples) for a quick start. ++To start using the connector, check out the [sample code and Common Data Model files](https://github.com/Azure/spark-cdm-connector/tree/spark3.2/samples). ## Reading data -When reading data, the connector uses metadata in the CDM folder to create the dataframe based on the resolved entity definition for the specified entity, as referenced in the manifest. Entity attribute names are used as dataframe column names. Attribute datatypes are mapped to the column datatype. When the dataframe is loaded, it's populated from the entity partitions identified in the manifest. +When the connector reads data, it uses metadata in the Common Data Model folder to create the DataFrame based on the resolved entity definition for the specified entity, as referenced in the manifest. The connector uses entity attribute names as DataFrame column names. It maps attribute data types to column data types. When the DataFrame is loaded, it's populated from the entity partitions identified in the manifest. ++The connector looks in the specified manifest and any first-level submanifests for the specified entity. If the required entity is in a second-level or lower submanifest, or if there are multiple entities of the same name in different submanifests, you should specify the submanifest that contains the required entity rather than the root manifest. ++Entity partitions can be in a mix of formats (for example, CSV and Parquet). 
All the entity data files identified in the manifest are combined into one dataset regardless of format and loaded to the DataFrame. -The connector looks in the specified manifest and any first-level submanifests for the specified entity. If the required entity is in a second-level or lower submanifest, or if there are multiple entities of the same name in different submanifests, then the user should specify the submanifest that contains the required entity rather than the root manifest. -Entity partitions can be in a mix of formats (CSV, Parquet, etc.). All the entity data files identified in the manifest are combined into one dataset regardless of format and loaded to the dataframe. +When the connector reads CSV data, it uses the Spark `failfast` option by default. If the number of columns isn't equal to the number of attributes in the entity, the connector returns an error. -When reading CSV data, the connector uses the Spark FAILFAST option by default. It will return an error if the number of columns isn't equal to the number of attributes in the entity. Alternatively, as of 0.19, permissive mode is now supported. This mode is only supported for CSV files. With the permissive mode, when a CSV row has fewer number of columns than the entity schema, null values will be assigned for the missing columns. When a CSV row has more columns than the entity schema, the columns greater than the entity schema column count will be truncated to the schema column count. Usage is as follows: +Alternatively, as of 0.19, the connector supports permissive mode (only for CSV files). With permissive mode, when a CSV row has a lower number of columns than the entity schema, the connector assigns null values for the missing columns. When a CSV row has more columns than the entity schema, the columns greater than the entity schema column count are truncated to the schema column count. Usage is as follows: ```scala- .option("entity", "permissive") or .option("mode", "failfast") +.option("mode", "permissive") or .option("mode", "failfast") ``` ## Writing data -When writing to a CDM folder, if the entity doesn't already exist in the CDM folder, a new entity and definition is created and added to the CDM folder and referenced in the manifest. Two writing modes are supported: +When the connector writes to a Common Data Model folder, if the entity doesn't already exist in that folder, the connector creates a new entity and definition. It adds the entity and definition to the Common Data Model folder and references them in the manifest. -**Explicit write**: the physical entity definition is based on a logical CDM entity definition that the user specifies. +The connector supports two writing modes: -* The specified logical entity definition is read and resolved to create the physical entity definition used in the CDM folder. If import statements in any directly or indirectly referenced CDM definition file include aliases, then a config.json file that maps these aliases to CDM adapters and storage locations must be provided. For more on the use of aliases, see _Aliases and adapter configuration_ below. -* If the dataframe schema doesn't match the referenced entity definition, an error is returned. Ensure that the column datatypes in the dataframe match the attribute datatypes in the entity, including for decimal data, precision and scale set via traits in CDM. -* If the dataframe is inconsistent with the entity definition an error is returned. 
-* If the dataframe is consistent: - * If the entity already exists in the manifest, the provided entity definition is resolved and validated against the definition in the CDM folder. If the definitions don't match, an error is returned, otherwise data is written and the partition information in the manifest is updated - * If the entity doesn't exist in the CDM folder, a resolved copy of the entity definition is written to the manifest in the CDM folder and data is written and the partition information in the manifest is updated. +* **Explicit write**: The physical entity definition is based on a logical Common Data Model entity definition that you specify. -**Implicit write**: the entity definition is derived from the dataframe structure. + The connector reads and resolves the specified logical entity definition to create the physical entity definition used in the Common Data Model folder. If import statements in any directly or indirectly referenced Common Data Model definition file include aliases, you must provide a *config.json* file that maps these aliases to Common Data Model adapters and storage locations. + * If the DataFrame schema doesn't match the referenced entity definition, the connector returns an error. Ensure that the column data types in the DataFrame match the attribute data types in the entity, including for decimal data, precision, and scale set via traits in Common Data Model. + * If the DataFrame is inconsistent with the entity definition, the connector returns an error. + * If the DataFrame is consistent: + * If the entity already exists in the manifest, the connector resolves the provided entity definition and validates it against the definition in the Common Data Model folder. If the definitions don't match, the connector returns an error. Otherwise, the connector writes data and updates the partition information in the manifest. + * If the entity doesn't exist in the Common Data Model folder, the connector writes a resolved copy of the entity definition to the manifest in the Common Data Model folder. The connector writes data and updates the partition information in the manifest. -* If the entity doesn't exist in the CDM folder, the implicit definition is used to create the resolved entity definition in the target CDM folder. -* If the entity exists in the CDM folder, the implicit definition is validated against the existing entity definition. If the definitions don't match an error is returned, otherwise data is written and a derived logical entity definition(s) is written into a subfolder of the entity folder. -Data is written to data folder(s) within an entity subfolder. A save mode determines whether the new data overwrites or is appended to existing data, or an error is returned if data exists. The default is to return an error if data already exists. +* **Implicit write**: The entity definition is derived from the DataFrame structure. -## CDM alias integration + * If the entity doesn't exist in the Common Data Model folder, the connector uses the implicit definition to create the resolved entity definition in the target Common Data Model folder. + * If the entity exists in the Common Data Model folder, the connector validates the implicit definition against the existing entity definition. If the definitions don't match, the connector returns an error. Otherwise, the connector writes data, and it writes derived logical entity definitions into a subfolder of the entity folder. 
-CDM definition files use aliases in import statements to simplify the import statement and allow the location of the imported content to be late bound at execution time. Using aliases: + The connector writes data to data folders within an entity subfolder. A save mode determines whether the new data overwrites or is appended to existing data, or an error is returned if data exists. The default is to return an error if data already exists. -* Facilitates easy organization of CDM files so that related CDM definitions can be grouped together at different locations. -* Allows CDM content to be accessed from different deployed locations at runtime. +## Common Data Model alias integration -The snippet below shows the use of aliases in import statements in a CDM definition file. +Common Data Model definition files use aliases in import statements to simplify the import statements and allow the location of the imported content to be late bound at runtime. Using aliases: ++* Facilitates easy organization of Common Data Model files so that related Common Data Model definitions can be grouped together at different locations. +* Allows Common Data Model content to be accessed from different deployed locations at runtime. ++The following snippet shows the use of aliases in import statements in a Common Data Model definition file: ```Scala "imports": [ The snippet below shows the use of aliases in import statements in a CDM definit ] ``` -In the example above, 'cdm' is used as an alias for the location of the CDM foundations file, and 'core' is used as an alias for the location of the TrackedEntity definition file. +The preceding example uses `cdm` as an alias for the location of the Common Data Model foundations file. It uses `core` as an alias for the location of the `TrackedEntity` definition file. ++Aliases are text labels that are matched to a namespace value in an adapter entry in a Common Data Model *config.json* file. An adapter entry specifies the adapter type (for example, `adls`, `CDN`, `GitHub`, or `local`) and a URL that defines a location. Some adapters support other configuration options, such as a connection timeout. Whereas aliases are arbitrary text labels, the `cdm` alias is treated in a special way. -Aliases are text labels that are matched to a namespace value in an adapter entry in a CDM config.json file. An adapter entry specifies the adapter type (for example "adls", "CDN", "GitHub", "local", etc.) and a URL that defines a location. Some adapters support other configuration options, such as a connection timeout. While aliases are arbitrary text labels, the 'cdm' alias is treated in a special manner as described below. +The Spark CDM connector looks in the entity definition's model root location for the *config.json* file to load. If the *config.json* file is at some other location or you want to override the *config.json* file in the model root, you can provide the location of a *config.json* file by using the `configPath` option. The *config.json* file must contain adapter entries for all the aliases used in the Common Data Model code that's being resolved, or the connector reports an error. -The Spark CDM Connector will look in the entity definition model root location for the config.json file to load. If the config.json file is at some other location or the user seeks to override the config.json file in the model root, then the user can provide the location of a config.json file using the _configPath_ option. 
The config.json file must contain adapter entries for all the aliases used in the CDM code being resolved or an error will be reported. +The ability to override the *config.json* file means that you can provide runtime-accessible locations for Common Data Model definitions. Ensure that the content that's referenced at runtime is consistent with the definitions that were used when Common Data Model was originally authored. -By being able to override the config.json, the user can provide runtime-accessible locations for CDM definitions. Ensure that the content referenced at runtime is consistent with the definitions used when the CDM was originally authored. +By convention, the `cdm` alias refers to the location of the root-level standard Common Data Model definitions, including the *foundations.cdm.json* file. This file includes the Common Data Model primitive data types and a core set of trait definitions required for most Common Data Model entity definitions. -By convention, the _cdm_ alias is used to refer to the location of the root-level standard CDM definitions, including the foundations.cdm.json file, which includes the CDM primitive datatypes and a core set of trait definitions required for most CDM entity definitions. The _cdm_ alias can be resolved like any other alias using an adapter entry in the config.json file. Alternatively, if an adapter isn't specified or a null entry is provided, then the _cdm_ alias will be resolved by default to the CDM public CDN at `https://cdm-schema.microsoft.com/logical/`. The user can also use the _cdmSource_ option to override how the cdm alias is resolved (see the option details below). Using the _cdmsource_ option is useful if the cdm alias is the only alias used in the CDM definitions being resolved as it can avoid needing to create or reference a config.json file. +You can resolve the `cdm` alias like any other alias, by using an adapter entry in the *config.json* file. If you don't specify an adapter or you provide a null entry, the `cdm` alias is resolved by default to the Common Data Model public content delivery network (CDN) at `https://cdm-schema.microsoft.com/logical/`. -## Parameters, options and save mode +You can also use the `cdmSource` option to override how the `cdm` alias is resolved. Using the `cdmSource` option is useful if the `cdm` alias is the only alias used in the Common Data Model definitions that are being resolved, because it can avoid the need to create or reference a *config.json* file. -For both read and write, the Spark CDM Connector library name is provided as a parameter. A set of options are used to parameterize the behavior of the connector. When writing, a save mode is also supported. +## Parameters, options, and save mode -The connector library name, options and save mode are formatted as follows: +For both reads and writes, you provide the Spark CDM connector's library name as a parameter. You use a set of options to parameterize the behavior of the connector. When you're writing, the connector also supports a save mode. -* dataframe.read.format("com.microsoft.cdm") [.option("option", "value")]* -* dataframe.write.format("com.microsoft.cdm") [.option("option", "value")]* .mode(savemode.\<saveMode\>) +The connector library name, options, and save mode are formatted as follows: -Here's an example of how the connector is used for read, showing some of the options. More examples are provided later. 
+* `dataframe.read.format("com.microsoft.cdm") [.option("option", "value")]*` +* `dataframe.write.format("com.microsoft.cdm") [.option("option", "value")]* .mode(savemode.\<saveMode\>)` ++Here's an example that shows some of the options in using the connector for reads: ```scala val readDf = spark.read.format("com.microsoft.cdm") val readDf = spark.read.format("com.microsoft.cdm") .load() ``` -### Common READ and WRITE options +### Common read and write options -The following options identify the entity in the CDM folder that is either being read or written to. +The following options identify the entity in the Common Data Model folder that you're reading or writing to. |**Option** |**Description** |**Pattern and example usage** | |||::|-|storage|The endpoint URL for the ADLS gen2 storage account with HNS enabled in which the CDM folder is located. <br/>Use the _dfs_.core.windows.net URL | \<accountName\>.dfs.core.windows.net "myAccount.dfs.core.windows.net"| -|manifestPath|The relative path to the manifest or model.json file in the storage account. For read, can be a root manifest or a submanifest or a model.json. For write, must be a root manifest.|\<container\>/{\<folderPath\>/}\<manifestFileName>, <br/>"mycontainer/default.manifest.cdm.json" "models/hr/employees.manifest.cdm.json" <br/> "models/hr/employees/model.json" (read only) | -|entity| The name of the source or target entity in the manifest. When writing an entity for the first time in a folder, the resolved entity definition will be given this name. Entity name is case sensitive.| \<entityName\> <br/>"customer"| -|maxCDMThreads| The maximum number of concurrent reads while resolving an entity definition. | Any valid integer. for example - 5| +|`storage`|The endpoint URL for the Azure Data Lake Storage account, with HNS enabled, that contains the Common Data Model folder. <br/>Use the `dfs.core.windows.net` URL. | `<accountName>.dfs.core.windows.net` `"myAccount.dfs.core.windows.net"`| +|`manifestPath`|The relative path to the manifest or *model.json* file in the storage account. For reads, it can be a root manifest or a submanifest or a *model.json* file. For writes, it must be a root manifest.|`<container>/{<folderPath>}<manifestFileName>`, <br/>`"mycontainer/default.manifest.cdm.json"` `"models/hr/employees.manifest.cdm.json"` <br/> `"models/hr/employees/model.json"` (read only)| +|`entity`| The name of the source or target entity in the manifest. When you're writing an entity for the first time in a folder, the connector gives the resolved entity definition this name. The entity name is case sensitive.| `<entityName>` <br/>`"customer"`| +|`maxCDMThreads`| The maximum number of concurrent reads while the connector resolves an entity definition. | Any valid integer, such as `5`| > [!NOTE]-> You no longer need to specify a logical entity definition in addition to the physical entity definition in the CDM folder on read. +> You no longer need to specify a logical entity definition in addition to the physical entity definition in the Common Data Model folder on read. ### Explicit write options -The following options identify the logical entity definition that defines the entity being written. The logical entity definition will be resolved to a physical definition that defines how the entity will be written. +The following options identify the logical entity definition for the entity that's being written. The logical entity definition is resolved to a physical definition that defines how the entity is written. 
-|**Option** |**Description** |**Pattern / example usage** | +|**Option** |**Description** |**Pattern or example usage** | |||::|-|entityDefinitionStorage |The ADLS gen2 storage account containing the entity definition. Required if different to the storage account hosting the CDM folder.|\<accountName\>.dfs.core.windows.net<br/>"myAccount.dfs.core.windows.net"| -|entityDefinitionModelRoot|The location of the model root or corpus within the account. |\<container\>/\<folderPath\> <br/> "crm/core"<br/>| -|entityDefinitionPath|Location of the entity. File path to the CDM definition file relative to the model root, including the name of the entity in that file.|\<folderPath\>/\<entityName\>.cdm.json/\<entityName\><br/>"sales/customer.cdm.json/customer"| -configPath| The container and folder path to a config.json file that contains the adapter configurations for all aliases included in the entity definition file and any directly or indirectly referenced CDM files. **Not required if the config.json is in the model root folder.**| \<container\>\<folderPath\>| -|useCdmStandardModelRoot | Indicates the model root is located at [https://cdm-schema.microsoft.com/CDM/logical/](https://github.com/microsoft/CDM/tree/master/schemaDocuments) <br/>Used to reference entity types defined in the CDM Content Delivery Network (CDN).<br/>Overrides: entityDefinitionStorage, entityDefinitionModelRoot if specified.<br/>| "useCdmStandardModelRoot" | -|cdmSource|Defines how the 'cdm' alias if present in CDM definition files is resolved. If this option is used, it overrides any _cdm_ adapter specified in the config.json file. Values are "builtin" or "referenced". Default value is "referenced" <br/> If set to _referenced_, then the latest published standard CDM definitions at `https://cdm-schema.microsoft.com/logical/` are used. If set to _builtin_ then the CDM base definitions built in to the CDM object model used by the Spark CDM Connector will be used. <br/> Note: <br/> 1). The Spark CDM Connector may not be using the latest CDM SDK so may not contain the latest published standard definitions. <br/> 2). The built-in definitions only include the top-level CDM content such as foundations.cdm.json, primitives.cdm.json, etc. If you wish to use lower-level standard CDM definitions, either use _referenced_ or include a cdm adapter in the config.json.<br/>| "builtin"\|"referenced". | +|`entityDefinitionStorage` |The Azure Data Lake Storage account that contains the entity definition. Required if it's different from the storage account that hosts the Common Data Model folder.|`<accountName>.dfs.core.windows.net`<br/>`"myAccount.dfs.core.windows.net"`| +|`entityDefinitionModelRoot`|The location of the model root or corpus within the account. |`<container>/<folderPath>` <br/> `"crm/core"`| +|`entityDefinitionPath`|The location of the entity. It's the file path to the Common Data Model definition file relative to the model root, including the name of the entity in that file.|`<folderPath>/<entityName>.cdm.json/<entityName>`<br/>`"sales/customer.cdm.json/customer"`| +`configPath`| The container and folder path to a *config.json* file that contains the adapter configurations for all aliases included in the entity definition file and any directly or indirectly referenced Common Data Model files. 
<br/><br/>This option is not required if *config.json* is in the model root folder.| `<container><folderPath>`| +|`useCdmStandardModelRoot` | Indicates that the model root is located at [https://cdm-schema.microsoft.com/CDM/logical/](https://github.com/microsoft/CDM/tree/master/schemaDocuments). Used to reference entity types defined in the Common Data Model CDN. Overrides `entityDefinitionStorage` and `entityDefinitionModelRoot` (if specified).<br/>| `"useCdmStandardModelRoot"` | +|`cdmSource`|Defines how the `cdm` alias (if it's present in Common Data Model definition files) is resolved. If you use this option, it overrides any `cdm` adapter specified in the *config.json* file. Values are `builtin` or `referenced`. The default value is `referenced`.<br/><br/> If you set this option to `referenced`, the connector uses the latest published standard Common Data Model definitions at `https://cdm-schema.microsoft.com/logical/`. If you set this option to `builtin`, the connector uses the Common Data Model base definitions built in to the Common Data Model object model that the connector is using. <br/><br/> Note: <br/> * The Spark CDM connector might not be using the latest Common Data Model SDK, so it might not contain the latest published standard definitions. <br/> * The built-in definitions include only the top-level Common Data Model content, such as *foundations.cdm.json* or *primitives.cdm.json*. If you want to use lower-level standard Common Data Model definitions, either use `referenced` or include a `cdm` adapter in *config.json*.| `"builtin"`\|`"referenced"` | -In the example above, the full path to the customer entity definition object is: -`https://myAccount.dfs.core.windows.net/models/crm/core/sales/customer.cdm.json/customer`, where ΓÇÿmodelsΓÇÖ is the container in ADLS. +In the preceding example, the full path to the customer entity definition object is `https://myAccount.dfs.core.windows.net/models/crm/core/sales/customer.cdm.json/customer`. In that path, *models* is the container in Azure Data Lake Storage. ### Implicit write options -If a logical entity definition isn't specified on write, the entity will be written implicitly, based on the dataframe schema. +If you don't specify a logical entity definition on write, the entity is written implicitly, based on the DataFrame schema. -When writing implicitly, a timestamp column will normally be interpreted as a CDM DateTime datatype. This can be overridden to create an attribute of CDM Time datatype by providing a metadata object associated with the column that specifies the datatype. See Handling CDM Time data below for details. +When you're writing implicitly, a time stamp column is normally interpreted as a Common Data Model `DateTime` data type. You can override this interpretation to create an attribute of the Common Data Model `Time` data type by providing a metadata object that's associated with the column that specifies the data type. For details, see [Handling Common Data Model time data](#handling-common-data-model-time-data) later in this article. -Initially, this is supported for CSV files only. Support for writing time data to Parquet will be added in a later release. +Support for writing time data exists for CSV files only. That support currently doesn't extend to Parquet. ### Folder structure and data format options -Folder organization and file format can be changed with the following options. +You can use the following options to change folder organization and file format. 
-|**Option** |**Description** |**Pattern / example usage** | +|**Option** |**Description** |**Pattern or example usage** | |||::|-|useSubManifest|If true, causes the target entity to be included in the root manifest via a submanifest. The submanifest and the entity definition are written into an entity folder beneath the root. Default is false.|"true"\|"false" | -|format|Defines the file format. Current supported file formats are CSV and Parquet. Default is "csv"|"csv"\|"parquet" <br/> | -|delimiter|CSV only. Defines the delimiter used. Default is comma. | "\|" | -|columnHeaders| CSV only. If true, will add a first row to data files with column headers. Default is "true"|"true"\|"false"| -|compression|Write only. Parquet only. Defines the compression format used. Default is "snappy" |"uncompressed" \| "snappy" \| "gzip" \| "lzo". -|dataFolderFormat|Allows user-definable data folder structure within an entity folder. Allows the use of date and time values to be substituted into folder names using DateTimeFormatter formatting. Non-formatter content must be enclosed in single quotes. Default format is ``` "yyyy"-"MM"-"dd" ``` producing folder names like 2020-07-30| ```year "yyyy" / month "MM"``` <br/> ```"Data"```| +|`useSubManifest`|If `true`, causes the target entity to be included in the root manifest via a submanifest. The submanifest and the entity definition are written into an entity folder beneath the root. Default is `false`.|`"true"`\|`"false"` | +|`format`|Defines the file format. Current supported file formats are CSV and Parquet. Default is `csv`.|`"csv"`\|`"parquet"` <br/> | +|`delimiter`|CSV only. Defines the delimiter that you're using. Default is comma. | `"|"` | +|`columnHeaders`| CSV only. If `true`, adds a first row to data files with column headers. Default is `true`.|`"true"`\|`"false"`| +|`compression`|Write only. Parquet only. Defines the compression format that you're using. Default is `snappy`. |`"uncompressed"` \| `"snappy"` \| `"gzip"` \| `"lzo"` | +|`dataFolderFormat`|Allows a user-definable data folder structure within an entity folder. Allows you to substitute date and time values into folder names by using `DateTimeFormatter` formatting. Non-formatter content must be enclosed in single quotation marks. Default format is `"yyyy"-"MM"-"dd"`, which produces folder names like *2020-07-30*.| `year "yyyy" / month "MM"` <br/> `"Data"`| ### Save mode -The save mode specifies how existing entity data in the CDM folder is handled when writing a dataframe. Options are to overwrite, append to, or error if data already exists. The default save mode is ErrorIfExists +The save mode specifies how the connector handles existing entity data in the Common Data Model folder when you're writing a DataFrame. Options are to overwrite, append to, or return an error if data already exists. The default save mode is `ErrorIfExists`. 
-|**Mode** |**Description**| +|**Mode**|**Description**| |||-|SaveMode.Overwrite |Will overwrite the existing entity definition if it's changed and replace existing data partitions with the data partitions being written.| -|SaveMode.Append |Will append data being written in new partitions alongside the existing partitions.<br/>Note: append doesn't support changing the schema; if the schema of the data being written is incompatible with the existing entity definition an error will be thrown.| -|SaveMode.ErrorIfExists|Will return an error if partitions already exist.| +|`SaveMode.Overwrite` |Overwrites the existing entity definition if it's changed and replaces existing data partitions with the data partitions that are being written.| +|`SaveMode.Append` |Appends data that's being written in new partitions alongside the existing partitions.<br/><br/>This mode doesn't support changing the schema. If the schema of the data that's being written is incompatible with the existing entity definition, the connector throws an error.| +|`SaveMode.ErrorIfExists`|Returns an error if partitions already exist.| -See _Folder and file organization_ below for details of how data files are named and organized on write. +For details of how data files are named and organized on write, see the [Folder and file naming and organization](#naming-and-organization-of-folders-and-files) section later in this article. ## Authentication -There are three modes of authentication that can be used with the Spark CDM Connector to read/write the CDM metadata and data partitions: Credential Passthrough, SasToken, and App Registration. +You can use three modes of authentication with the Spark CDM connector to read or write the Common Data Model metadata and data partitions: credential passthrough, shared access signature (SAS) token, and app registration. -### Credential pass-through +### Credential passthrough -In Synapse, the Spark CDM Connector supports use of [Managed identities for Azure resource](../../../active-directory/managed-identities-azure-resources/overview.md) to mediate access to the Azure datalake storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/cli/azure/synapse/workspace/managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed. +In Azure Synapse Analytics, the Spark CDM connector supports the use of [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md) to mediate access to the Azure Data Lake Storage account that contains the Common Data Model folder. A managed identity is [automatically created for every Azure Synapse Analytics workspace](/cli/azure/synapse/workspace/managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts. -You must ensure the identity used is granted access to the appropriate storage accounts. Grant **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read access. In both cases, no extra connector options are required. 
+You must ensure that the chosen identity has access to the appropriate storage accounts: -### SAS token access control options +* Grant Storage Blob Data Contributor permissions to allow the library to write to Common Data Model folders. +* Grant Storage Blob Data Reader permissions to allow only read access. -SaS Token Credential authentication to storage accounts is an extra option for authentication to storage. With SAS token authentication, the SaS token can be at the container or folder level. The appropriate permissions (read/write) are required ΓÇô read manifest/partition only needs read level support, while write requires read and write support. +In both cases, no extra connector options are required. -| **Option** |**Description** |**Pattern and example usage** | +### Options for SAS token-based access control ++SAS token credentials are an extra option for authentication to storage accounts. With SAS token authentication, the SAS token can be at the container or folder level. The appropriate permissions are required: ++* Read permissions for a manifest or partition need only read-level support. +* Write permissions need both read and write support. ++| **Option** |**Description** |**Pattern and example usage** | |-||::|-| sasToken |The sastoken to access the relative storageAccount with the correct permissions | \<token\>| +| `sasToken` |The SAS token to access the relative storage account with the correct permissions | `<token>`| -### Credential-based access control options +### Options for credential-based access control -As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](../../../active-directory/develop/quickstart-register-app.md) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read. +As an alternative to using a managed identity or a user identity, you can provide explicit credentials to enable the Spark CDM connector to access data. In Azure Active Directory, [create an app registration](../../../active-directory/develop/quickstart-register-app.md). Then grant this app registration access to the storage account by using either of the following roles: -Once permissions are created, you can pass the app ID, app key, and tenant ID to the connector on each call to it using the options below. It's recommended to use Azure Key Vault to secure these values to ensure they aren't stored in clear text in your notebook file. +* Storage Blob Data Contributor to allow the library to write to Common Data Model folders +* Storage Blob Data Reader to allow only read permissions ++After you create permissions, you can pass the app ID, app key, and tenant ID to the connector on each call to it by using the following options. We recommend that you use Azure Key Vault to store these values to ensure that they aren't stored in clear text in your notebook file. | **Option** |**Description** |**Pattern and example usage** | |-||::|-| appId | The app registration ID used to authenticate to the storage account | \<guid\> | -| appKey | The registered app key or secret | \<encrypted secret\> | -| tenantId | The Azure Active Directory tenant ID under which the app is registered. 
| \<guid\> | +| `appId` | The app registration ID for authentication to the storage account | `<guid>` | +| `appKey` | The registered app key or secret | `<encrypted secret>` | +| `tenantId` | The Azure Active Directory tenant ID under which the app is registered | `<guid>` | ## Examples -The following examples all use appId, appKey and tenantId variables initialized earlier in the code based on an Azure app registration that has been given Storage Blob Data Contributor permissions on the storage for write and Storage Blob Data Reader permissions for read. +The following examples all use `appId`, `appKey`, and `tenantId` variables. You initialized these variables earlier in the code, based on an Azure app registration: Storage Blob Data Contributor permissions on the storage for write, and Storage Blob Data Reader permissions for read. ### Read -This code reads the Person entity from the CDM folder with manifest in `mystorage.dfs.core.windows.net/cdmdata/contacts/root.manifest.cdm.json`. +This code reads the `Person` entity from the Common Data Model folder with a manifest in `mystorage.dfs.core.windows.net/cdmdata/contacts/root.manifest.cdm.json`: ```scala val df = spark.read.format("com.microsoft.cdm") val df = spark.read.format("com.microsoft.cdm") .load() ``` -### Implicit Write ΓÇô using dataframe schema only +### Implicit write by using a DataFrame schema only -This code writes the dataframe _df_ to a CDM folder with a manifest to `mystorage.dfs.core.windows.net/cdmdata/Contacts/default.manifest.cdm.json` with an Event entity. +The following code writes the `df` DataFrame to a Common Data Model folder with a manifest to `mystorage.dfs.core.windows.net/cdmdata/Contacts/default.manifest.cdm.json` with an event entity. -Event data is written as parquet files, compressed with gzip, that are appended to the folder (new files -are added without deleting existing files). +The code writes event data as Parquet files, compresses it with `gzip`, and appends it to the folder. (The code adds new files without deleting existing files.) ```scala df.write.format("com.microsoft.cdm") .save() ``` -### Explicit Write - using an entity definition stored in ADLS +### Explicit write by using an entity definition stored in Data Lake Storage ++The following code writes the `df` DataFrame to a Common Data Model folder with a manifest at +`https://_mystorage_.dfs.core.windows.net/cdmdata/Contacts/root.manifest.cdm.json` with the `Person` entity. The code writes person data as new CSV files (by default) that overwrite existing files in the folder. -This code writes the dataframe _df_ to a CDM folder with manifest at -`https://_mystorage_.dfs.core.windows.net/cdmdata/Contacts/root.manifest.cdm.json` with the entity Person. Person data is written as new CSV files (by default) which overwrite existing files in the folder. -The Person entity definition is retrieved from -`https://_mystorage_.dfs.core.windows.net/models/cdmmodels/core/Contacts/Person.cdm.json` +The code retrieves the `Person` entity definition from +`https://_mystorage_.dfs.core.windows.net/models/cdmmodels/core/Contacts/Person.cdm.json`. 
```scala df.write.format("com.microsoft.cdm") df.write.format("com.microsoft.cdm") .save() ``` -### Explicit Write - using an entity defined in the CDM GitHub +### Explicit write by using an entity defined in the Common Data Model GitHub repo ++The following code writes the `df` DataFrame to a Common Data Model folder with: -This code writes the dataframe _df_ to a CDM folder with the manifest at `https://_mystorage_.dfs.core.windows.net/cdmdata/Teams/root.manifest.cdm.json` and a submanifest containing the TeamMembership entity, created in a TeamMembership subdirectory. TeamMembership data is written to CSV files (the default) that overwrite any existing data files. The TeamMembership entity definition is retrieved from the CDM CDN, at: -[https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json](https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json) +* The manifest at `https://_mystorage_.dfs.core.windows.net/cdmdata/Teams/root.manifest.cdm.json`. +* A submanifest that contains the `TeamMembership` entity that's created in a *TeamMembership* subdirectory. ++`TeamMembership` data is written to CSV files (the default) that overwrite any existing data files. The code retrieves the `TeamMembership` entity definition from the Common Data Model CDN at +[https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json](https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json). ```scala df.write.format("com.microsoft.cdm") df.write.format("com.microsoft.cdm") ## Other considerations -### Spark to CDM datatype mapping +### Mapping data types from Spark to Common Data Model -The following datatype mappings are applied when converting CDM to/from Spark. +The connector applies the following data type mappings when you convert Common Data Model to or from Spark. -|**Spark** |**CDM**| +|**Spark**|**Common Data Model**| |||-|ShortType|SmallInteger| -|IntegerType|Integer| -|LongType |BigInteger| -|DateType |Date| -|Timestamp|DateTime (optionally Time, see below)| -|StringType|String| -|DoubleType|Double| -|DecimalType(x,y)|Decimal (x,y) (default scale and precision are 18,4)| -|FloatType|Float| -|BooleanType|Boolean| -|ByteType|Byte| +|`ShortType`|`SmallInteger`| +|`IntegerType`|`Integer`| +|`LongType` |`BigInteger`| +|`DateType` |`Date`| +|`Timestamp`|`DateTime` (optionally `Time`)| +|`StringType`|`String`| +|`DoubleType`|`Double`| +|`DecimalType(x,y)`|`Decimal (x,y)` (default scale and precision are `18,4`)| +|`FloatType`|`Float`| +|`BooleanType`|`Boolean`| +|`ByteType`|`Byte`| -The CDM Binary datatype isn't supported. +The connector doesn't support the Common Data Model `Binary` data type. -### Handling CDM Date, DateTime, and DateTimeOffset data +### Handling Common Data Model Date, DateTime, and DateTimeOffset data -CDM Date and DateTime datatype values are handled as normal for Spark and Parquet, and in CSV are read/written in ISO 8601 format. +The Spark CDM connector handles Common Data Model `Date` and `DateTime` data types as normal for Spark and Parquet. In CSV, the connector reads and writes those data types in ISO 8601 format. -CDM _DateTime_ datatype values are _interpreted as UTC_, and in CSV written in ISO 8601 format, for example, -2020-03-13 09:49:00Z. +The connector interprets Common Data Model `DateTime` data type values as UTC. In CSV, the connector writes those values in ISO 8601 format. An example is `2020-03-13 09:49:00Z`. 
-CDM _DateTimeOffset_ values intended for recording local time instants are handled differently in Spark and -parquet from CSV. While CSV and other formats can express a local time instant as a structure, -comprising a datetime and a UTC offset, formatted in CSV like, 2020-03-13 09:49:00-08:00, Parquet and -Spark donΓÇÖt support such structures. Instead, they use a TIMESTAMP datatype that allows an instant to -be recorded in UTC time (or in some unspecified time zone). +Common Data Model `DateTimeOffset` values intended for recording local time instants are handled differently in Spark and Parquet from CSV. CSV and other formats can express a local time instant as a structure that comprises a datetime, such as `2020-03-13 09:49:00-08:00`. Parquet and Spark don't support such structures. Instead, they use a `TIMESTAMP` data type that allows an instant to be recorded in UTC (or in an unspecified time zone). -The Spark CDM connector will convert a DateTimeOffset value in CSV to a UTC timestamp. This will be persisted as a Timestamp in parquet and if subsequently persisted to CSV, the value will be serialized as a DateTimeOffset with a +00:00 offset. Importantly, there's no loss of temporal accuracy ΓÇô the serialized values represent the same instant as the original values, although the offset is lost. Spark systems use their system time as the baseline and normally express time using that local time. UTC times can always be computed by applying the local system offset. For Azure systems in all regions, system time is always UTC, so all timestamp values will normally be in UTC. +The Spark CDM connector converts a `DateTimeOffset` value in CSV to a UTC time stamp. This value is persisted as a time stamp in Parquet. If the value is later persisted to CSV, it will be serialized as a `DateTimeOffset` value with a +00:00 offset. There's no loss of temporal accuracy. The serialized values represent the same instant as the original values, although the offset is lost. -As Azure system values are always UTC, when using implicit write, where a CDM definition is derived from a dataframe, timestamp columns are translated to attributes with CDM DateTime datatype, which implies a UTC time. +Spark systems use their system time as the baseline and normally express time by using that local time. UTC times can always be computed through application of the local system offset. For Azure systems in all regions, the system time is always UTC, so all time stamp values are normally in UTC. When you're using an implicit write, where a Common Data Model definition is derived from a DataFrame, time stamp columns are translated to attributes with the Common Data Model `DateTime` data type, which implies a UTC time. -If it's important to persist a local time and the data will be processed in Spark or persisted in parquet, -then it's recommended to use a DateTime attribute and keep the offset in a separate attribute, for -example as a signed integer value representing minutes. In CDM, DateTime values are UTC, so the -offset must be applied when needed to compute local time. +If it's important to persist a local time and the data will be processed in Spark or persisted in Parquet, we recommend that you use a `DateTime` attribute and keep the offset in a separate attribute. For example, you can keep the offset as a signed integer value that represents minutes. In Common Data Model, DateTime values are in UTC, so you must apply the offset to compute local time. -In most cases, persisting local time isn't important. 
Local times are often only required in a UI for user -convenience and based on the userΓÇÖs time zone, so not storing a UTC time is often a better solution. +In most cases, persisting local time isn't important. Local times are often required only in a UI for user convenience and based on the user's time zone, so not storing a UTC time is often a better solution. -### Handling CDM time data +### Handling Common Data Model time data -Spark doesn't support an explicit Time datatype. An attribute with the CDM _Time_ datatype is represented in a Spark dataframe as a column with a Timestamp datatype in a dataframe. When a time value is read, the timestamp in the dataframe will be initialized with the Spark epoch date 01/01/1970 plus the time value as read from the source. +Spark doesn't support an explicit `Time` data type. An attribute with the Common Data Model `Time` data type is represented in a Spark DataFrame as a column with a `Timestamp` data type. When The Spark CDM connector reads a time value, the time stamp in the DataFrame is initialized with the Spark epoch date 01/01/1970 plus the time value as read from the source. -When using explicit write, a timestamp column can be mapped to either a DateTime or Time attribute. If a timestamp is mapped to a Time attribute, the date portion of the timestamp is stripped off. +When you use explicit write, you can map a time stamp column to either a `DateTime` or `Time` attribute. If you map a time stamp to a `Time` attribute, the date portion of the time stamp is stripped off. -When using implicit write, a Timestamp column is mapped by default to a DateTime attribute. To map a timestamp column to a Time attribute, you must add a metadata object to the column in the dataframe that indicates that the timestamp should be interpreted as a time value. The code below shows how this is done in Scala. +When you use implicit write, a time stamp column is mapped by default to a `DateTime` attribute. To map a time stamp column to a `Time` attribute, you must add a metadata object to the column in the DataFrame that indicates that the time stamp should be interpreted as a time value. The following code shows how to do this in Scala: ```scala val md = new MetadataBuilder().putString(ΓÇ£dataTypeΓÇ¥, ΓÇ£TimeΓÇ¥) StructField(ΓÇ£ATimeColumnΓÇ¥, TimeStampType, true, md)) ### Time value accuracy -The Spark CDM Connector supports time values in either DateTime or Time with seconds having up to six decimal places, based on the format of the data either in the file being read (CSV or Parquet) or as defined in the dataframe, enabling accuracy from single seconds to microseconds. --### Folder and file naming and organization +The Spark CDM connector supports time values in either `DateTime` or `Time`. Seconds have up to six decimal places, based on the format of the data in the file that's being read (CSV or Parquet) or as defined in the DataFrame. The use of six decimal places enables accuracy from single seconds to microseconds. -When writing CDM folders, the default folder organization illustrated below is used. +### Naming and organization of folders and files -By default, data files are written into folders created for the current date, named like '2010-07-31'. The folder structure and names can be customized using the dateFolderFormat option, described earlier. +When you're writing to Common Data Model folders, there's a default folder organization. By default, data files are written into folders created for the current date, named like *2010-07-31*. 
You can customize the folder structure and names by using the `dateFolderFormat` option. Data file names are based on the following pattern: \<entity\>-\<jobid\>-*.\<fileformat\>. -The number of data partitions written can be controlled using the sparkContext.parallelize() method. The number of partitions is either determined by the number of executors in the Spark cluster or can be specified explicitly. The Scala example below creates a dataframe with two partitions. +You can control the number of data partitions that are written by using the `sparkContext.parallelize()` method. The number of partitions is either determined by the number of executors in the Spark cluster or specified explicitly. The following Scala example creates a DataFrame with two partitions: ```scala val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema) ``` -**Explicit Write** (defined by a referenced entity definition) +Here's an example of an explicit write that's defined by a referenced entity definition: ```text +-- <CDMFolder>- |-- default.manifest.cdm.json << with entity ref and partition info + |-- default.manifest.cdm.json << with entity reference and partition info +-- <Entity> |-- <entity>.cdm.json << resolved physical entity definition |-- <data folder> val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema) +-- ... ``` -**Explicit Write with sub-manifest:** +Here's an example of an explicit write with a submanifest: ```text +-- <CDMFolder>- |-- default.manifest.cdm.json << contains reference to sub-manifest + |-- default.manifest.cdm.json << contains reference to submanifest +-- <Entity> |-- <entity>.cdm.json- |-- <entity>.manifest.cdm.json << sub-manifest with partition info + |-- <entity>.manifest.cdm.json << submanifest with partition info |-- <data folder> |-- <data folder> +-- ... ``` -**Implicit (entity definition is derived from dataframe schema):** +Here's an example of an implicit write in which the entity definition is derived from a DataFrame schema: ```text +-- <CDMFolder> val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema) +-- <Entity> |-- <entity>.cdm.json << resolved physical entity definition +-- LogicalDefinition- | +-- <entity>.cdm.json << logical entity definition(s) + | +-- <entity>.cdm.json << logical entity definitions |-- <data folder> |-- <data folder> +-- ... ``` -**Implicit Write with sub-manifest:** +Here's an example of an implicit write with a submanifest: ```text +-- <CDMFolder>- |-- default.manifest.cdm.json << contains reference to sub-manifest + |-- default.manifest.cdm.json << contains reference to submanifest +-- <Entity> |-- <entity>.cdm.json << resolved physical entity definition- |-- <entity>.manifest.cdm.json << sub-manifest with reference to the entity and partition info + |-- <entity>.manifest.cdm.json << submanifest with reference to the entity and partition info +-- LogicalDefinition- | +-- <entity>.cdm.json << logical entity definition(s) + | +-- <entity>.cdm.json << logical entity definitions |-- <data folder> |-- <data folder> +-- ... val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema) ## Troubleshooting and known issues -* Ensure the decimal precision and scale of decimal data type fields used in the dataframe match the data type used in the CDM entity definition - requires precision and scale traits are defined on the data type. If the precision and scale aren't defined explicitly in CDM, the default used is Decimal(18,4). 
For model.json files, Decimal is assumed to be Decimal(18,4). -* Folder and file names in the options below shouldn't include spaces or special characters, such as "=": manifestPath, entityDefinitionModelRoot, entityDefinitionPath, dataFolderFormat. --## Unsupported features --The following features aren't yet supported: --* Overriding a timestamp column to be interpreted as a CDM Time rather than a DateTime is initially supported for CSV files only. Support for writing Time data to Parquet will be added in a later release. -* Parquet Map type and arrays of primitive types and arrays of array types aren't currently supported by CDM so aren't supported by the Spark CDM Connector. +* Ensure that the decimal precision and scale of decimal data type fields that you use in the DataFrame match the data type that's in the Common Data Model entity definition. If the precision and scale aren't defined explicitly in Common Data Model, the default is `Decimal(18,4)`. For *model.json* files, `Decimal` is assumed to be `Decimal(18,4)`. +* Folder and file names in the following options shouldn't include spaces or special characters, such as an equal sign (=): `manifestPath`, `entityDefinitionModelRoot`, `entityDefinitionPath`, `dataFolderFormat`. ## Next steps |
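The credential-based access control options described in this entry assume an app registration that already holds a data-plane role on the storage account. The following Az PowerShell sketch shows one way to grant that role; the resource group, storage account, and app display name are placeholders, and your environment may call for a narrower scope or the reader role instead.

```powershell
# Assumes the Az.Resources and Az.Storage modules are installed and you've signed in with Connect-AzAccount.
# Placeholder names: replace the resource group, storage account, and app registration display name with your own.
$storageAccount   = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorage"
$servicePrincipal = Get-AzADServicePrincipal -DisplayName "my-spark-cdm-app"

# Storage Blob Data Contributor lets the connector write to Common Data Model folders;
# use Storage Blob Data Reader for read-only scenarios.
New-AzRoleAssignment -ObjectId $servicePrincipal.Id `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $storageAccount.Id
```

The resulting app ID, secret, and tenant ID are what the `appId`, `appKey`, and `tenantId` connector options expect, ideally retrieved from Azure Key Vault rather than stored in the notebook.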
update-center | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md | + + Title: What's new in Update management center (Preview) +description: Learn about what's new and recent updates in the Update management center (Preview) service. ++++ Last updated : 03/03/2023+++# What's new in Update management center (Preview) ++[Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview). ++## November 2022 ++### New region support ++Update management center (Preview) now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ++## October 2022 ++### Improved onboarding experience ++You can now enable periodic assessment for your machines at scale using [Policy](periodic-assessment-at-scale.md) or from the [portal](manage-update-settings.md#configure-settings-on-single-vm). +++## Next steps ++- [Learn more](support-matrix.md) about supported regions. |
virtual-desktop | Configure Device Redirections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-device-redirections.md | Title: Configure device redirection - Azure description: How to configure device redirection for Azure Virtual Desktop. Previously updated : 02/24/2023 Last updated : 03/06/2023 Set the following RDP property to configure WebAuthn redirection: When enabled, WebAuthn requests from the session are sent to the local PC to be completed using the local Windows Hello for Business or security devices like FIDO keys. For more information, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication-preview). +## Disable drive redirection ++If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable drive redirection for security purposes. To disable drive redirection: ++1. Open the **Registry Editor (regedit)**. ++2. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**. ++3. Create the following registry key: ++ - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client + - **Type**: REG_DWORD + - **Name**: DisableDriveRedirection ++4. Set the value of the registry key to **0**. ++## Disable printer redirection ++If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable printer redirection for security purposes. To disable printer redirection: ++1. Open the **Registry Editor (regedit)**. ++1. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**. ++1. Create the following registry key: ++ - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client + - **Type**: REG_DWORD + - **Name**: DisablePrinterRedirection ++1. Set the value of the registry key to **0**. ++## Disable clipboard redirection ++If you're making RDP connections from personal resources to corporate ones on the Terminal Server or Windows Desktop clients, you can disable clipboard redirection for security purposes. To disable clipboard redirection: ++1. Open the **Registry Editor (regedit)**. ++1. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **Terminal Server Client**. ++1. Create the following registry key: ++ - **Key**: HKLM\\Software\\Microsoft\\Terminal Server Client + - **Type**: REG_DWORD + - **Name**: DisableClipboardRedirection ++1. Set the value of the registry key to **0**. + ## Next steps - For more information about how to configure RDP settings, see [Customize RDP properties](customize-rdp-properties.md). |
virtual-machines | Capacity Reservation Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-modify.md | -# Modify a Capacity Reservation (preview) +# Modify a Capacity Reservation **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets |
virtual-machines | Disks Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md | Title: Select a disk type for Azure IaaS VMs - managed disks description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 02/06/2023 Last updated : 03/06/2023 Standard SSDs are designed to provide single-digit millisecond latencies and the ### Standard SSD transactions -For standard SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB. These transactions incur a billable cost. +For standard SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB. These transactions incur a billable cost, but there is an hourly limit on the number of transactions that can incur a billable cost. If that hourly limit is reached, additional transactions during that hour no longer incur a cost. For details, see the [blog post](https://aka.ms/billedcapsblog). ### Standard SSD Bursting |
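As a rough illustration of how the 256-KiB accounting above works, the following sketch rounds an I/O size up to 256-KiB units; the sample size is arbitrary, and the hourly billing cap itself isn't modeled here.

```powershell
# Each I/O up to 256 KiB counts as one transaction; larger I/Os count as multiple 256 KiB units.
$ioSizeKiB    = 1100                                   # example I/O size (arbitrary)
$transactions = [math]::Ceiling($ioSizeKiB / 256)      # 1100 KiB rounds up to 5 transaction units
Write-Output "$ioSizeKiB KiB I/O counts as $transactions transaction(s)"
```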
virtual-machines | Dlsv5 Dldsv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv5-dldsv5-series.md | Dlsv5-series virtual machines do not have any temporary storage thus lowering th | Standard_D2ls_v5 | 2 | 4 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 | | Standard_D4ls_v5 | 4 | 8 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8ls_v5 | 8 | 16 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |-| Standard_D16s_v5 | 16 | 32 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 | +| Standard_D16ls_v5 | 16 | 32 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 | | Standard_D32ls_v5 | 32 | 64 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 | | Standard_D48ls_v5 | 48 | 96 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 | | Standard_D64ls_v5 | 64 | 128 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 30000 | For more information on Disks Types: [Disk Types](./disks-types.md#ultra-disks) ## Next steps -Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs. +Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs. |
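To double-check that the corrected size name (Standard_D16ls_v5) is available to your subscription in a given region, a quick query such as the following can help; the region shown is only an example.

```powershell
# List the Dls_v5 sizes offered in a region and confirm Standard_D16ls_v5 appears among them.
Get-AzVMSize -Location "eastus" |
    Where-Object { $_.Name -like "Standard_D*ls_v5" } |
    Select-Object Name, NumberOfCores, MemoryInMB
```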
virtual-machines | Features Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md | The Azure VM Agent manages interactions between an Azure VM and the Azure fabric The Azure VM Agent is preinstalled on Azure Marketplace images. It can also be installed manually on supported operating systems. -The agent runs on multiple operating systems. However, the extensions framework has a [limit for the operating systems that extensions use](https://support.microsoft.com/en-us/help/4078134/azure-extension-supported-operating-systems). Some extensions are not supported across all operating systems and might emit error code 51 ("Unsupported OS"). Check the individual extension documentation for supportability. +The agent runs on multiple operating systems. However, the extensions framework has a [limit for the operating systems that extensions use](https://support.microsoft.com/en-us/help/4078134/azure-extension-supported-operating-systems). Some extensions aren't supported across all operating systems and might emit error code 51 ("Unsupported OS"). Check the individual extension documentation for supportability. ### Network access If you use a [supported version of the Azure VM Agent](https://support.microsoft Agents can only be used to download extension packages and reporting status. For example, if an extension installation needs to download a script from GitHub (Custom Script extension) or needs access to Azure Storage (Azure Backup), then you need to open additional firewall or network security group (NSG) ports. Different extensions have different requirements, because they're applications in their own right. For extensions that require access to Azure Storage or Azure Active Directory, you can allow access by using Azure NSG [service tags](../../virtual-network/network-security-groups-overview.md#service-tags). -The Azure VM Agent does not have proxy server support for you to redirect agent traffic requests through. That means the Azure VM Agent will rely on your custom proxy (if you have one) to access resources on the internet or on the host through IP 168.63.129.16. +The Azure VM Agent doesn't have proxy server support for you to redirect agent traffic requests through. That means the Azure VM Agent relies on your custom proxy (if you have one) to access resources on the internet or on the host through IP 168.63.129.16. ## Discover VM extensions Set-AzVMCustomScriptExtension -ResourceGroupName "myResourceGroup" ` -Run "Create-File.ps1" -Location "West US" ``` -The following example uses the [VMAccess extension](/troubleshoot/azure/virtual-machines/reset-rdp) to reset the administrative password of a Windows VM to a temporary password. After you run this code, you should reset the password at first login. +The following example uses the [VMAccess extension](/troubleshoot/azure/virtual-machines/reset-rdp#reset-by-using-the-vmaccess-extension-and-powershell) to reset the administrative password of a Windows VM to a temporary password. After you run this code, you should reset the password at first sign-in. ```powershell $cred=Get-Credential You can use the [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextens ### Azure portal -You can apply VM extensions to an existing VM through the Azure portal. Select the VM in the portal, select **Extensions**, and then select **Add**. Choose the extension that you want from the list of available extensions, and follow the instructions in the wizard. 
+You can apply VM extensions to an existing VM through the Azure portal. Select the VM in the portal, select **Extensions + applications**, and then select **Add**. Choose the extension that you want from the list of available extensions, and follow the instructions in the wizard. The following example shows the installation of the Microsoft Antimalware extension from the Azure portal: For more information on creating ARM templates, see [Virtual machines in an Azur When you run a VM extension, it might be necessary to include sensitive information such as credentials, storage account names, and access keys. Many VM extensions include a protected configuration that encrypts data and only decrypts it inside the target VM. Each extension has a specific protected configuration schema, and each is detailed in extension-specific documentation. -The following example shows an instance of the Custom Script extension for Windows. The command to run includes a set of credentials. In this example, the command to run is not encrypted. +The following example shows an instance of the Custom Script extension for Windows. The command to run includes a set of credentials. In this example, the command to run isn't encrypted. ```json { The following troubleshooting actions apply to all VM extensions: - Look at the system logs. Check for other operations that might have interfered with the extension, such as a long-running installation of another application that required exclusive access to the package manager. -- In a VM, if there is an existing extension with a failed provisioning state, any other new extension fails to install.+- In a VM, if there's an existing extension with a failed provisioning state, any other new extension fails to install. ### Common reasons for extension failures |
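The protected-settings behavior discussed in this entry can also be exercised from PowerShell: values passed as protected settings are encrypted and decrypted only on the target VM. This is a minimal sketch for the Custom Script extension; the script URI, file name, and handler version are illustrative assumptions.

```powershell
# Public settings are visible in the extension configuration; protected settings are encrypted.
$settings          = @{ fileUris = @("https://contoso.example/scripts/Create-File.ps1") }
$protectedSettings = @{ commandToExecute = "powershell -ExecutionPolicy Unrestricted -File Create-File.ps1" }

Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" -Location "West US" `
    -Name "CustomScriptExtension" -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" -TypeHandlerVersion "1.10" `
    -Settings $settings -ProtectedSettings $protectedSettings
```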
virtual-machines | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/overview.md | |
virtual-machines | Instance Metadata Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md | http://169.254.169.254/metadata/<endpoint>/[<filter parameter>/...]?<query param ``` The parameters correspond to the indexes/keys that would be used to walk down the json object were you interacting with a parsed representation. -For example, `/metatadata/instance` returns the json object: +For example, `/metadata/instance` returns the json object: ```json { "compute": { ... }, |
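From inside a VM, the `/metadata/instance` endpoint shown in this entry can be queried with PowerShell as in the sketch below. The `api-version` value is one of the published versions and may differ in your environment; the required `Metadata: true` header also means the request must not be routed through a proxy.

```powershell
# Query the Instance Metadata Service from within the VM (non-routable address, no proxy).
$instance = Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# Walk the parsed JSON the same way the endpoint path parameters do.
$instance.compute.location
```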
virtual-machines | Maintenance Notifications Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-notifications-powershell.md | You can also get the maintenance status for all VMs in a resource group by using Get-AzVM -ResourceGroupName myResourceGroup -Status ``` -The following PowerShell example takes your subscription ID and returns a list of VMs that are scheduled for maintenance. +The following PowerShell example takes your subscription ID and returns a list of VMs indicating whether they are scheduled for maintenance. ```powershell -function MaintenanceIterator -{ - Select-AzSubscription -SubscriptionId $args[0] -- $rgList= Get-AzResourceGroup -- for ($rgIdx=0; $rgIdx -lt $rgList.Length ; $rgIdx++) - { - $rg = $rgList[$rgIdx] - $vmList = Get-AzVM -ResourceGroupName $rg.ResourceGroupName - for ($vmIdx=0; $vmIdx -lt $vmList.Length ; $vmIdx++) - { - $vm = $vmList[$vmIdx] - $vmDetails = Get-AzVM -ResourceGroupName $rg.ResourceGroupName -Name $vm.Name -Status - if ($vmDetails.MaintenanceRedeployStatus ) - { - Write-Output "VM: $($vmDetails.Name) IsCustomerInitiatedMaintenanceAllowed: $($vmDetails.MaintenanceRedeployStatus.IsCustomerInitiatedMaintenanceAllowed) $($vmDetails.MaintenanceRedeployStatus.LastOperationMessage)" - } - } +function MaintenanceIterator { + param ( + $SubscriptionId + ) + + Select-AzSubscription -SubscriptionId $SubscriptionId | Out-Null ++ $rgList = Get-AzResourceGroup + foreach ($rg in $rgList) { + $vmList = Get-AzVM -ResourceGroupName $rg.ResourceGroupName + foreach ($vm in $vmList) { + $vmDetails = Get-AzVM -ResourceGroupName $rg.ResourceGroupName -Name $vm.Name -Status + [pscustomobject]@{ + Name = $vmDetails.Name + ResourceGroupName = $rg.ResourceGroupName + IsCustomerInitiatedMaintenanceAllowed = [bool]$vmDetails.MaintenanceRedeployStatus.IsCustomerInitiatedMaintenanceAllowed + LastOperationMessage = $vmDetails.MaintenanceRedeployStatus.LastOperationMessage + } }+ } } ``` function MaintenanceIterator Using information from the function in the previous section, the following starts maintenance on a VM if **IsCustomerInitiatedMaintenanceAllowed** is set to true. ```powershell-Restart-AzVM -PerformMaintenance -name $vm.Name -ResourceGroupName $rg.ResourceGroupName ++MaintenanceIterator -SubscriptionId <Subscription ID> | + Where-Object -FilterScript {$_.IsCustomerInitiatedMaintenanceAllowed} | + Restart-AzVM -PerformMaintenance + ``` ## Classic deployments |
virtual-machines | Share Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md | When you share a gallery using RBAC, you need to provide the `imageID` to anyone If you share gallery resources to someone outside of your Azure tenant, they will need your `tenantID` to log in and have Azure verify they have access to the resource before they can use it within their own tenant. You will need to provide them with your `tenantID`, there is no way for someone outside your organization to query for your `tenantID`. > [!IMPORTANT]-> RBAC sharing can be used to share resources with users within the organization (or) users outside the organization (cross-tenant). When the resource is shared with RBAC, please see instructions here to consume the image: +> RBAC sharing can be used to share resources with users within the organization (or) users outside the organization (cross-tenant). Here are the instructions to consume an image shared with RBAC and create VM/VMSS: > -> [RBAC - Shared within your organization](https://learn.microsoft.com/azure/virtual-machines/vm-generalized-image-version?tabs=cli#rbacshared-within-your-organization) +> [RBAC - Shared within your organization](vm-generalized-image-version.md#rbacshared-within-your-organization) > -> [RBAC - Shared from another tenant](https://learn.microsoft.com/azure/virtual-machines/vm-generalized-image-version?tabs=cli#rbacshared-from-another-tenant) +> [RBAC - Shared from another tenant](vm-generalized-image-version.md#rbacshared-from-another-tenant) > ### [Portal](#tab/portal) |
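Because someone outside your organization can't query your `tenantID`, you normally look it up yourself and share it with them. A quick way to do that with Az PowerShell, assuming you're signed in to the tenant that owns the gallery, is:

```powershell
# Show the tenant ID of the current Azure PowerShell context.
(Get-AzContext).Tenant.Id

# Or list every tenant your signed-in account can access.
Get-AzTenant
```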
virtual-wan | Virtual Wan Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md | Use the VPN device configuration file to configure your on-premises VPN device. 1. From your Virtual WAN page, go to **Hubs -> Your virtual hub -> VPN (Site to site)** page. -1. At the top of the **VPN (Site to site)** page, click **Download VPN Config**. You'll see a series of messages as Azure creates a new storage account in the resource group 'microsoft-network-[location]', where location is the location of the WAN. You can also add an existing storage account by clicking "Use Existing" and adding a valid SAS URL. +1. At the top of the **VPN (Site to site)** page, click **Download VPN Config**. You'll see a series of messages as Azure creates a new storage account in the resource group 'microsoft-network-[location]', where location is the location of the WAN. You can also add an existing storage account by clicking "Use Existing" and adding a valid SAS URL with write permissions enabled. To learn more about creating a new SAS URL, see [Generate the SAS URL](packet-capture-site-to-site-portal.md#URL). 1. Once the file finishes creating, click the link to download the file. This creates a new file with VPN configuration at the provided SAS url location. To learn about the contents of the file, see [About the VPN device configuration file](#config-file) in this section. |
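If you'd rather create the SAS URL for an existing storage account with PowerShell than through the portal, a sketch like the following produces a container-level SAS URL with write access. The account, container, and expiry values are placeholders; your security requirements may call for narrower permissions or a shorter lifetime.

```powershell
# Build a storage context from the account key, then generate a short-lived container SAS URL.
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName "myResourceGroup" -Name "mystorageaccount")[0].Value
$context    = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $storageKey

New-AzStorageContainerSASToken -Context $context -Name "vpnconfig" `
    -Permission rwl -ExpiryTime (Get-Date).AddHours(2) -FullUri
```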
web-application-firewall | Waf Front Door Exclusion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md | description: This article provides information on exclusion lists configuration Previously updated : 10/18/2022 Last updated : 03/07/2023 Header and cookie names are case insensitive. Query strings, POST arguments, and ### Body contents inspection -Some of the managed rules evaluate the raw payload of the request body, before it's parsed into POST arguments or JSON arguments. So, in some situations you might see log entries with a matchVariableName of `InitialBodyContents`. +Some of the managed rules evaluate the raw payload of the request body, before it's parsed into POST arguments or JSON arguments. So, in some situations you might see log entries with a matchVariableName of `InitialBodyContents` or `DecodedInitialBodyContents`. -For example, suppose you create an exclusion with a match variable of *Request body POST args* and a selector to identify and ignore POST arguments named *FOO*. You'll no longer see any log entries with a matchVariableName of `PostParamValue:FOO`. However, if a POST argument named *FOO* contains text that triggers a rule, the log might show the detection in the initial body contents. +For example, suppose you create an exclusion with a match variable of *Request body POST args* and a selector to identify and ignore POST arguments named *FOO*. You'll no longer see any log entries with a matchVariableName of `PostParamValue:FOO`. However, if a POST argument named *FOO* contains text that triggers a rule, the log might show the detection in the initial body contents. You can't currently create exclusions for initial body contents. ## <a name="define-exclusion-based-on-web-application-firewall-logs"></a> Define exclusion rules based on Web Application Firewall logs |
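Exclusions like the POST-argument example in this entry can also be defined programmatically. The sketch below assumes the `Az.FrontDoor` PowerShell module and an existing Front Door WAF policy; check the cmdlet and its accepted values against your installed module version.

```powershell
# Create an exclusion for POST arguments named FOO; it's applied when you rebuild the managed rule set.
$exclusion = New-AzFrontDoorWafManagedRuleExclusionObject `
    -Variable RequestBodyPostArgNames `
    -Operator Equals `
    -Selector "FOO"
```

The exclusion object is then attached at the rule set, rule group, or rule scope when you update the policy, mirroring the scopes this article describes.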