Updates from: 06/10/2023 02:00:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Ad External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-external-identities-videos.md
- Title: Microsoft Azure Active Directory B2C videos -
-description: Microsoft Azure Active Directory B2C Video Series
- Previously updated: 09/13/2022
-# Microsoft Azure Active Directory External Identities videos
-
-Learn the basics of External Identities - Azure Active Directory B2C (Azure AD B2C) and Azure Active Directory B2B (Azure AD B2B) in the Microsoft identity platform.
--
-## Azure Active Directory B2C architecture deep dive series
-
-Get a deeper view into the features and technical aspects of the Azure AD B2C service.
--
-| Video title | Video | Video title | Video |
-|:|:|:|:|
-|[Azure AD B2C sign-up sign-in](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6&t=2s) 10:25 | [:::image type="icon" source="./media/external-identities-videos/customer-sign-up-sign-in.png" border="false":::](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6) | [Azure AD B2C single sign on and self service password reset](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) 8:40 | [:::image type="icon" source="./media/external-identities-videos/single-sign-on.png" border="false":::](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) |
-| [Application and identity migration to Azure AD B2C](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) 10:34 | [:::image type="icon" source="./media/external-identities-videos/identity-migration-aad-b2c.png" border="false":::](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) | [Build resilient and scalable flows using Azure AD B2C](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) 16:47 | [:::image type="icon" source="./media/external-identities-videos/b2c-scalable-flows.png" border="false":::](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) |
-| [Building a custom CIAM solution with Azure AD B2C and ISV alliances](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) 10:01 | [:::image type="icon" source="./media/external-identities-videos/build-custom-b2c-solution.png" border="false":::](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) | [Protecting Web APIs with Azure AD B2C](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) 19:03 | [:::image type="icon" source="./media/external-identities-videos/protecting-web-apis.png" border="false":::](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) |
-| [Integration of SAML with Azure AD B2C](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) 9:09 | [:::image type="icon" source="./media/external-identities-videos/saml-integration.png" border="false":::](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) | [Azure AD B2C Identity Protection and Conditional Access](https://www.youtube.com/watch?v=frn5jVqbmUo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=15) 14:44 | [:::image type="icon" source="./media/external-identities-videos/identity-protection-and-conditional-access.png" border="false":::](https://www.youtube.com/watch?v=frn5jVqbmUo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=15)
-
-## Azure Active Directory B2C how to series
-
-Learn how to perform various use cases in Azure AD B2C.
-
-| Video title | Video | Video title | Video |
-|:|:|:|:|
-|[Azure AD: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) 6:57 | [:::image type="icon" source="./media/external-identities-videos/monitoring-reporting-aad-b2c.png" border="false":::](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) | [Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) 7:09 | [:::image type="icon" source="./media/external-identities-videos/user-migration-msgraph-api.png" border="false":::](https://www.youtube.com/watch?v=9BRXBtkBzL4list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) |
-| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22 | [:::image type="icon" source="./media/external-identities-videos/user-migration-stratagies.png" border="false":::](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) | [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41 | [:::image type="icon" source="./media/external-identities-videos/language-localization.png" border="false":::](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
-|[Configure monitoring: Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) 17:23 | [:::image type="icon" source="./media/external-identities-videos/configure-monitoring.png" border="false":::](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) | [Configuring custom domains in Azure AD B2C using Azure Front Door](https://www.youtube.com/watch?v=mVNB59VK-DQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 19:45 | [:::image type="icon" source="./media/external-identities-videos/configure-custom-domains.png" border="false":::](https://www.youtube.com/watch?v=mVNB59VK-DQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
active-directory-b2c Conditional Access Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-technical-profile.md
The **InputClaims** element contains a list of claims to send to Conditional Access.

| UserId | Yes | string | The identifier of the user who signs in. |
| AuthenticationMethodsUsed | Yes | stringCollection | The list of methods the user used to sign in. Possible values: `Password` and `OneTimePasscode`. |
| IsFederated | Yes | boolean | Indicates whether or not a user signed in with a federated account. The value must be `false`. |
-| IsMfaRegistered | Yes |boolean | Indicates whether the user already enrolled a phone number for multi-factor authentication. |
+| IsMfaRegistered | Yes |boolean | Indicates whether the user already enrolled a method for multi-factor authentication. If the value is set to `false`, the Conditional Access policy evaluation returns a `block` challenge if action is required. |
The **InputClaimsTransformations** element may contain a collection of **InputClaimsTransformation** elements that are used to modify the input claims or generate new ones before sending them to the Conditional Access service.
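As an illustrative sketch only: the input claims from the table above might be declared in a custom policy technical profile along these lines. The technical profile `Id`, the `Handler` string, and the claim bindings shown here are assumptions for illustration, not the article's exact sample.

```xml
<TechnicalProfile Id="ConditionalAccessEvaluation">
  <DisplayName>Conditional Access Provider</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ConditionalAccessProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <Item Key="OperationType">Evaluation</Item>
  </Metadata>
  <InputClaims>
    <!-- Identifier of the user who signs in -->
    <InputClaim ClaimTypeReferenceId="UserId" PartnerClaimType="UserId" />
    <!-- Methods used to sign in: Password, OneTimePasscode -->
    <InputClaim ClaimTypeReferenceId="AuthenticationMethodsUsed" />
    <!-- Must be false for local accounts -->
    <InputClaim ClaimTypeReferenceId="IsFederated" DefaultValue="false" />
    <!-- Whether the user already enrolled an MFA method -->
    <InputClaim ClaimTypeReferenceId="IsMfaRegistered" />
  </InputClaims>
</TechnicalProfile>
```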
active-directory-b2c Configure Security Analytics Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-security-analytics-sentinel.md
+
+ Title: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel
+
+description: Use Microsoft Sentinel to perform security analytics for Azure Active Directory B2C data.
+ Last updated: 03/06/2023
+#Customer intent: As an IT professional, I want to gather logs and audit data using Microsoft Sentinel and Azure Monitor to secure applications that use Azure Active Directory B2C.
++
+# Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel
+
+Increase the security of your Azure Active Directory B2C (Azure AD B2C) environment by routing logs and audit information to Microsoft Sentinel. The scalable Microsoft Sentinel is a cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Use the solution for alert detection, threat visibility, proactive hunting, and threat response for Azure AD B2C.
+
+Learn more:
+
+* [What is Microsoft Sentinel?](../sentinel/overview.md)
+* [What is SOAR?](https://www.microsoft.com/security/business/security-101/what-is-soar)
+
+More uses for Microsoft Sentinel, with Azure AD B2C, are:
+
+* Detect previously undetected threats and minimize false positives with analytics and threat intelligence features
+* Investigate threats with artificial intelligence (AI)
+ * Hunt for suspicious activities at scale, and benefit from the experience of years of cybersecurity work at Microsoft
+* Respond to incidents rapidly with common task orchestration and automation
+* Meet your organization's security and compliance requirements
+
+In this tutorial, learn how to:
+
+* Transfer Azure AD B2C logs to a Log Analytics workspace
+* Enable Microsoft Sentinel in a Log Analytics workspace
+* Create a sample rule in Microsoft Sentinel to trigger an incident
+* Configure an automated response
+
+## Configure Azure AD B2C with Azure Monitor Log Analytics
+
+To define where logs and metrics for a resource are sent:
+
+1. Enable **Diagnostic settings** in Azure AD, in your Azure AD B2C tenant.
+2. Configure Azure AD B2C to send logs to Azure Monitor.
+
+Learn more, [Monitor Azure AD B2C with Azure Monitor](./azure-monitor.md).
+
+## Deploy a Microsoft Sentinel instance
+
+After you configure your Azure AD B2C instance to send logs to Azure Monitor, enable an instance of Microsoft Sentinel.
+
+ >[!IMPORTANT]
+ >To enable Microsoft Sentinel, you need Contributor permissions to the subscription in which the Microsoft Sentinel workspace resides. To use Microsoft Sentinel, you need Contributor or Reader permissions on the resource group to which the workspace belongs.
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Select the subscription where the Log Analytics workspace is created.
+3. Search for and select **Microsoft Sentinel**.
+
+ ![Screenshot of Azure Sentinel entered into the search field and the Azure Sentinel option that appears.](./media/configure-security-analytics-sentinel/sentinel-add.png)
+
+4. Select **Add**.
+5. In the **search workspaces** field, select the new workspace.
+
+ ![Screenshot of the search workspaces field under Choose a workspace to add to Azure Sentinel.](./media/configure-security-analytics-sentinel/create-new-workspace.png)
+
+6. Select **Add Microsoft Sentinel**.
+
+ >[!NOTE]
+ >It's possible to run Microsoft Sentinel on more than one workspace; however, data is isolated in a single workspace.<br/> See [Quickstart: Onboard Microsoft Sentinel](../sentinel/quickstart-onboard.md).
+
+## Create a Microsoft Sentinel rule
+
+After you enable Microsoft Sentinel, get notified when something suspicious occurs in your Azure AD B2C tenant.
+
+You can create custom analytics rules to discover threats and anomalous behaviors in your environment. These rules search for specific events, or event sets, and alert you when event thresholds or conditions are met. Then incidents are generated for investigation.
+
+See, [Create custom analytics rules to detect threats](../sentinel/detect-threats-custom.md)
+
+ >[!NOTE]
+ >Microsoft Sentinel has templates to create threat detection rules that search your data for suspicious activity. For this tutorial, you create a rule.
+
+### Notification rule for unsuccessful forced access
+
+Use the following steps to receive notification about two or more unsuccessful, forced access attempts into your environment. An example is a brute-force attack.
+
+1. In Microsoft Sentinel, from the left menu, select **Analytics**.
+2. On the top bar, select **+ Create** > **Scheduled query rule**.
+
+ ![Screenshot of the Create option under Analytics.](./media/configure-security-analytics-sentinel/create-scheduled-rule.png)
+
+3. In the Analytics Rule wizard, go to **General**.
+4. For **Name**, enter a name for unsuccessful logins.
+5. For **Description**, indicate the rule notifies for two or more unsuccessful sign-ins, within 60 seconds.
+6. For **Tactics**, select a category. For example, select **PreAttack**.
+7. For **Severity**, select a severity level.
+8. **Status** is **Enabled** by default. To change a rule, go to the **Active rules** tab.
+
+ ![Screenshot of Create new rule with options and selections.](./media/configure-security-analytics-sentinel/create-new-rule.png)
+
+9. Select the **Set rule logic** tab.
+10. Enter a query in the **Rule query** field. The query example organizes the sign-ins by `UserPrincipalName`.
+
+ ![Screenshot of query text in the Rule query field under Set rule logic.](./media/configure-security-analytics-sentinel/rule-query.png)
+
+11. Go to **Query scheduling**.
+12. For **Run query every**, enter **5** and **Minutes**.
+13. For **Lookup data from the last**, enter **5** and **Minutes**.
+14. For **Generate alert when number of query results**, select **Is greater than**, and **0**.
+15. For **Event grouping**, select **Group all events into a single alert**.
+16. For **Stop running query after alert is generated**, select **Off**.
+17. Select **Next: Incident settings (Preview)**.
+
+ ![Screenshot of Query scheduling selections and options.](./media/configure-security-analytics-sentinel/query-scheduling.png)
+
+18. Go to the **Review and create** tab to review rule settings.
+19. When the **Validation passed** banner appears, select **Create**.
+
+ ![Screenshot of selected settings, the Validation passed banner, and the Create option.](./media/configure-security-analytics-sentinel/review-create.png)
+
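+A rule query along the following lines could back the steps above. This is a hedged sketch: it assumes Azure AD B2C sign-in events flow into the workspace's `SigninLogs` table through the diagnostic settings configured earlier, and that a nonzero `ResultType` marks an unsuccessful sign-in.
+
+```kusto
+SigninLogs
+// Keep only failed sign-ins (ResultType 0 indicates success).
+| where ResultType != 0
+// Count failures per user, matching the rule's grouping by UserPrincipalName.
+| summarize FailedSignIns = count() by UserPrincipalName
+// Two or more failures within the query window trigger the alert.
+| where FailedSignIns >= 2
+```
+
+With the scheduling values used later in this tutorial (run every 5 minutes, look up the last 5 minutes, alert when results are greater than 0), any user with repeated failures in that window generates an alert.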
+#### View a rule and related incidents
+
+View the rule and the incidents it generates. Find your newly created custom rule of type **Scheduled** in the table under the **Active rules** tab on the main **Analytics** screen.
+
+1. Go to the **Analytics** screen.
+2. Select the **Active rules** tab.
+3. In the table, under **Scheduled**, find the rule.
+
+You can edit, enable, disable, or delete the rule.
+
+ ![Screenshot of active rules with Enable, Disable, Delete, and Edit options.](./media/configure-security-analytics-sentinel/rule-crud.png)
+
+#### Triage, investigate, and remediate incidents
+
+An incident can include multiple alerts, and is an aggregation of relevant evidence for an investigation. At the incident level, you can set properties such as Severity and Status.
+
+Learn more: [Investigate incidents with Microsoft Sentinel](../sentinel/investigate-cases.md).
+
+1. Go to the **Incidents** page.
+2. Select an incident.
+3. On the right, detailed incident information appears, including severity, entities, events, and the incident ID.
+
+ ![Screenshot that shows incident information.](./media/configure-security-analytics-sentinel/select-incident.png)
+
+4. On the **Incidents** pane, select **View full details**.
+5. Review tabs that summarize the incident.
+
+ ![Screenshot of a list of incidents.](./media/configure-security-analytics-sentinel/full-details.png)
+
+6. Select **Evidence** > **Events** > **Link to Log Analytics**.
+7. In the results, see the identity `UserPrincipalName` value attempting sign-in.
+
+ ![Screenshot of incident details.](./media/configure-security-analytics-sentinel/logs.png)
+
+## Automated response
+
+Microsoft Sentinel has security orchestration, automation, and response (SOAR) functions. Attach automated actions, or a playbook, to analytics rules.
+
+See, [What is SOAR?](https://www.microsoft.com/security/business/security-101/what-is-soar)
+
+### Email notification for an incident
+
+For this task, use a playbook from the Microsoft Sentinel GitHub repository.
+
+1. Go to a configured playbook.
+2. Edit the rule.
+3. On the **Automated response** tab, select the playbook.
+
+Learn more: [Incident-Email-Notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
+
+ ![Screenshot of automated response options for a rule.](./media/configure-security-analytics-sentinel/automation-tab.png)
+
+## Resources
+
+For more information about Microsoft Sentinel and Azure AD B2C, see:
+
+* [Azure AD B2C Reports & Alerts, Workbooks](https://github.com/azure-ad-b2c/siem#workbooks)
+* [Microsoft Sentinel documentation](../sentinel/index.yml)
+
+## Next step
+
+[Handle false positives in Microsoft Sentinel](../sentinel/false-positives.md)
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
After you add the custom domain and configure your application, users will still
## (Optional) Azure Front Door advanced configuration
-You can use Azure Front Door advanced configuration, such as [Azure Web Application Firewall (WAF)](partner-azure-web-application-firewall.md). Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
+You can use Azure Front Door advanced configuration, such as [Azure Web Application Firewall (WAF)](partner-web-application-firewall.md). Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
When using custom domains, consider the following points:
active-directory-b2c External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md
+
+ Title: Microsoft Azure Active Directory B2C external identity video series
+
+description: Learn about external identities in Azure AD B2C in the Microsoft identity platform
+ Last updated: 06/08/2023
+# Microsoft Azure Active Directory B2C external identity video series
+
+Learn the basics of External Identities - Azure Active Directory B2C (Azure AD B2C) and Azure Active Directory B2B (Azure AD B2B) in the Microsoft identity platform.
+
+## Azure Active Directory B2C architecture deep dive series
+
+Get a deeper view into the features and technical aspects of the Azure AD B2C service.
+
+| Video title | Video | Video title | Video |
+|:|:|:|:|
+|[Azure AD B2C sign-up sign-in](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7&t=2s) 10:25|[:::image type="icon" source="./media/external-identities-videos/customer-sign-up-sign-in.png" border="false":::](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6)|[Azure AD B2C single sign on and self service password reset](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) 8:40 |[:::image type="icon" source="./media/external-identities-videos/single-sign-on.png" border="false":::](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7)|
+|[Application and identity migration to Azure AD B2C](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) 10:34|[:::image type="icon" source="./media/external-identities-videos/identity-migration.png" border="false":::](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9)| [Build resilient and scalable flows using Azure AD B2C](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) 16:47|[:::image type="icon" source="./media/external-identities-videos/b2c-scalable-flows.png" border="false":::](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12)|
+|[Building a custom CIAM solution with Azure AD B2C and ISV alliances](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) 10:01|[:::image type="icon" source="./media/external-identities-videos/build-custom-b2c-solution.png" border="false":::](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8)|[Protecting Web APIs with Azure AD B2C](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) 19:03| [:::image type="icon" source="./media/external-identities-videos/protecting-web-apis.png" border="false":::](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10)|
+|[Integration of SAML with Azure AD B2C](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) 9:09 | [:::image type="icon" source="./media/external-identities-videos/saml-integration.png" border="false":::](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) |[Azure AD B2C Identity Protection and Conditional Access](https://www.youtube.com/watch?v=frn5jVqbmUo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=15) 14:44 | [:::image type="icon" source="./media/external-identities-videos/identity-protection-and-conditional-access.png" border="false":::](https://www.youtube.com/watch?v=frn5jVqbmUo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=15)|
+
+## Azure Active Directory B2C how to series
+
+Learn how to perform various use cases in Azure AD B2C.
+
+| Video title | Video |Video title|Video|
+|:|:|:|:|
+|[Azure AD: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) 6:57|[:::image type="icon" source="./media/external-identities-videos/monitoring-reporting.png" border="false":::](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2)|[Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) 7:09| [:::image type="icon" source="./media/external-identities-videos/user-migration-msgraph-api.png" border="false":::](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6)|
+| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22| [:::image type="icon" source="./media/external-identities-videos/user-migration-stratagies.png" border="false":::](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=3)| [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41| [:::image type="icon" source="./media/external-identities-videos/language-localization.png" border="false":::](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) |
+|[Configure monitoring: Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) 17:23 | [:::image type="icon" source="./media/external-identities-videos/configure-monitoring.png" border="false":::](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=15) | [Configuring custom domains in Azure AD B2C using Azure Front Door](https://www.youtube.com/watch?v=mVNB59VK-DQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 19:45| [:::image type="icon" source="./media/external-identities-videos/configure-custom-domains.png" border="false":::](https://www.youtube.com/watch?v=mVNB59VK-DQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) |
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for Web Application Firewall (WAF).
| ISV partner | Description and integration walkthroughs |
|:-|:--|
| ![Screenshot of Akamai logo](./medi) allows fine grained manipulation of traffic to protect and secure your identity infrastructure against malicious attacks. |
-| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. |
+| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. |
| ![Screenshot of Cloudflare logo](./medi) is a WAF provider that helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi and XSS. |

## Developer tools
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
+
+ Title: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall
+
+description: Learn to configure Azure AD B2C with Azure Web Application Firewall to protect applications from malicious attacks
+ Last updated: 03/08/2023
+# Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall
+
+Learn how to enable the Azure Web Application Firewall (WAF) service for an Azure Active Directory B2C (Azure AD B2C) tenant, with a custom domain. WAF protects web applications from common exploits and vulnerabilities.
+
+>[!NOTE]
+>This feature is in public preview.
+
+See, [What is Azure Web Application Firewall?](../web-application-firewall/overview.md)
+
+## Prerequisites
+
+To get started, you need:
+
+* An Azure subscription
+  * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+* **An Azure AD B2C tenant** – authorization server that verifies user credentials using custom policies defined in the tenant
+ * Also known as the identity provider (IdP)
+ * See, [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+* **Azure Front Door (AFD)** – enables custom domains for the Azure AD B2C tenant
+ * See, [Azure Front Door and CDN documentation](../frontdoor/index.yml)
+* **WAF** – manages traffic sent to the authorization server
+ * [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/#overview)
+
+## Custom domains in Azure AD B2C
+
+To use custom domains in Azure AD B2C, use the custom domain features in AFD. See, [Enable custom domains for Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow).
+
+ > [!IMPORTANT]
+ > After you configure the custom domain, see [Test your custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain).
+
+## Enable WAF
+
+To enable WAF, configure a WAF policy and associate it with the AFD for protection.
+
+### Create a WAF policy
+
+Create a WAF policy with Azure-managed default rule set (DRS). See, [Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md).
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Select **Create a resource**.
+3. Search for Azure WAF.
+4. Select **Azure Web Application Firewall (WAF)**.
+5. Select **Create**.
+6. Go to the **Create a WAF policy** page.
+7. Select the **Basics** tab.
+8. For **Policy for**, select **Global WAF (Front Door)**.
+9. For **Front Door SKU**, select the **Basic**, **Standard**, or **Premium** SKU.
+10. For **Subscription**, select your Front Door subscription name.
+11. For **Resource group**, select your Front Door resource group name.
+12. For **Policy name**, enter a unique name for your WAF policy.
+13. For **Policy state**, select **Enabled**.
+14. For **Policy mode**, select **Detection**.
+15. Select **Review + create**.
+16. Go to the **Association** tab of the Create a WAF policy page.
+17. Select **+ Associate a Front Door profile**.
+18. For **Front Door**, select your Front Door name associated with Azure AD B2C custom domain.
+19. For **Domains**, select the Azure AD B2C custom domains to associate the WAF policy to.
+20. Select **Add**.
+21. Select **Review + create**.
+22. Select **Create**.
+
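+As a hedged alternative to the portal steps above, a similar policy can be sketched with the Azure CLI. The resource names here are placeholders, and the command requires the Azure CLI `front-door` extension; adjust both to your environment.
+
+```azurecli
+# Create a Front Door WAF policy in Detection mode (placeholder names).
+az network front-door waf-policy create \
+  --name MyB2cWafPolicy \
+  --resource-group MyFrontDoorResourceGroup \
+  --sku Premium_AzureFrontDoor \
+  --mode Detection
+```
+
+Associating the policy with a Front Door profile and your Azure AD B2C custom domains still follows the **Association** steps described above.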
+### Detection and Prevention modes
+
+When you create a WAF policy, the policy is in Detection mode. We recommend you don't disable Detection mode. In this mode, WAF doesn't block requests. Instead, requests that match the WAF rules are logged in the WAF logs.
+
+Learn more: [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
+
+The following query shows the requests blocked by the WAF policy in the past 24 hours. The details include rule name, request data, the action taken by the policy, and the policy mode.
+
+ ![Screenshot of blocked requests.](./media/partner-web-application-firewall/blocked-requests-query.png)
+
+ ![Screenshot of blocked requests details, such as Rule ID, Action, Mode, etc.](./media/partner-web-application-firewall/blocked-requests-details.png)
+
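+As a hedged sketch, a Log Analytics query along these lines surfaces the same information. It assumes the Front Door WAF logs land in the `AzureDiagnostics` table; the category and column names vary by Front Door SKU, so treat them as assumptions to verify against your workspace.
+
+```kusto
+AzureDiagnostics
+// Limit to the past 24 hours.
+| where TimeGenerated > ago(24h)
+// Front Door WAF log entries (category name varies by SKU).
+| where Category == "FrontDoorWebApplicationFirewallLog"
+// Only requests the policy blocked.
+| where action_s == "Block"
+| project TimeGenerated, ruleName_s, requestUri_s, action_s, policyMode_s
+```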
+Review the WAF logs to determine if policy rules cause false positives. Then, exclude the WAF rules based on the WAF logs.
+
+Learn more: [Define exclusion rules based on Web Application Firewall logs](../web-application-firewall/afds/waf-front-door-exclusion.md#define-exclusion-based-on-web-application-firewall-logs)
+
+#### Switching modes
+
+To see the WAF in action, select **Switch to prevention mode**, which changes the mode from Detection to Prevention. Requests that match the rules in the DRS are blocked and logged in the WAF logs.
+
+ ![Screenshot of options and selections for DefaultRuleSet under Web Application Firewall policies.](./media/partner-web-application-firewall/switch-to-prevention-mode.png)
+
+To revert to Detection mode, select **Switch to detection mode**.
+
+ ![Screenshot of DefaultRuleSet with Switch to detection mode.](./media/partner-web-application-firewall/switch-to-detection-mode.png)
+
+## Next steps
+
+* [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
+* [Web Application Firewall (WAF) with Front Door exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md)
active-directory-b2c Security Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/security-architecture.md
Your identity solution uses multiple components to provide a smooth sign in expe
|Component |Endpoint|Why|How to protect|
|-|-|-|-|
-|Azure AD B2C authentication endpoints|`/authorize`, `/token`, `/.well-known/openid-configuration`, `/discovery/v2.0/keys`|Prevent resource exhaustion|[Web Application Firewall (WAF)](./partner-azure-web-application-firewall.md) and [Azure Front Door (AFD)](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_53b0ace78faa14e3c3b1c8b385bf944d_k_&OCID=AIDcmm5edswduu_SEM__k_53b0ace78faa14e3c3b1c8b385bf944d_k_&msclkid=53b0ace78faa14e3c3b1c8b385bf944d)|
+|Azure AD B2C authentication endpoints|`/authorize`, `/token`, `/.well-known/openid-configuration`, `/discovery/v2.0/keys`|Prevent resource exhaustion|[Web Application Firewall (WAF)](./partner-web-application-firewall.md) and [Azure Front Door (AFD)](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_53b0ace78faa14e3c3b1c8b385bf944d_k_&OCID=AIDcmm5edswduu_SEM__k_53b0ace78faa14e3c3b1c8b385bf944d_k_&msclkid=53b0ace78faa14e3c3b1c8b385bf944d)|
|Sign-in|NA|Malicious sign-ins may try to brute force accounts or use leaked credentials|[Identity Protection](/azure/active-directory/identity-protection/overview-identity-protection)|
|Sign-up|NA|Fraudulent sign-ups that may try to exhaust resources.|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500)<br> Fraud prevention technologies, such as [Dynamics Fraud Protection](./partner-dynamics-365-fraud-protection.md)|
|Email OTP|NA|Fraudulent attempts to brute force or exhaust resources|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500) and [Authenticator App](/azure/active-directory/authentication/concept-authentication-authenticator-app)|
|Multifactor authentication controls|NA|Unsolicited phone calls or SMS messages, or resource exhaustion.|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500) and [Authenticator App](/azure/active-directory/authentication/concept-authentication-authenticator-app)|
-|External REST APIs|Your REST API endpoints|Malicious usage of user flows or custom policies can lead to resource exhaustion at your API endpoints.|[WAF](./partner-azure-web-application-firewall.md) and [AFD](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_921daffd3bd81af80dd9cba9348858c4_k_&OCID=AIDcmm5edswduu_SEM__k_921daffd3bd81af80dd9cba9348858c4_k_&msclkid=921daffd3bd81af80dd9cba9348858c4)|
+|External REST APIs|Your REST API endpoints|Malicious usage of user flows or custom policies can lead to resource exhaustion at your API endpoints.|[WAF](./partner-web-application-firewall.md) and [AFD](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_921daffd3bd81af80dd9cba9348858c4_k_&OCID=AIDcmm5edswduu_SEM__k_921daffd3bd81af80dd9cba9348858c4_k_&msclkid=921daffd3bd81af80dd9cba9348858c4)|
### Protection mechanisms
The following table provides an overview of the different protection mechanisms
|Identity Protection|Identity Protection provides ongoing risk detection. When a risk is detected during sign-in, you can configure an Azure AD B2C Conditional Access policy to allow the user to remediate the risk before proceeding with the sign-in. Administrators can also use Identity Protection reports to review users who are at risk and review detection details. The risk detections report includes information about each risk detection, such as its type, the location of the sign-in attempt, and more. Administrators can also confirm or deny that the user is compromised.|<ul><li>[Investigate risk with Identity Protection](./identity-protection-investigate-risk.md)</li></ul>|
|Conditional Access (CA)|When a user attempts to sign in, CA gathers various signals, such as risks from Identity Protection, to make decisions and enforce organizational policies. CA can assist administrators in developing policies that are consistent with their organization's security posture. The policies can include the ability to completely block user access or provide access after the user has completed another authentication, such as MFA.|<ul><li>[Add Conditional Access policies to user flows](./conditional-access-user-flow.md)</li></ul>|
|Multifactor authentication (MFA)|MFA adds a second layer of security to the sign-up and sign-in process and is an essential component of improving the security posture of user authentication in Azure AD B2C. The Authenticator app (TOTP) is the recommended MFA method in Azure AD B2C.|<ul><li>[Enable multifactor authentication](./multi-factor-authentication.md)</li></ul>|
-|Security Information and Event management (SIEM)/ Security Orchestration, Automation and Response (SOAR) |You need a reliable monitoring and alerting system for analyzing usage patterns such as sign-ins and sign-ups, and detect any anomalous behavior that may be indicative of a cyberattack. It's an important step that adds an extra layer of security. It also you to understand patterns and trends that can only be captured and built upon over time. Alerting assists in determining factors such as the rate of change in overall sign-ins, an increase in failed sign-ins, and failed sign-up journeys, phone-based frauds such as IRSF attacks, and so on. All of these can be indicators of an ongoing cyberattack that requires immediate attention. Azure AD B2C supports both high level and fine grain logging, as well as the generation of reports and alerts. It's advised that you implement monitoring and alerting in all production tenants. | <ul><li>[Monitor using Azure Monitor](./azure-monitor.md)</li><li>[ Use reports & alerts](https://github.com/azure-ad-b2c/siem)</li><li> [Monitor for fraudulent MFA usage](./phone-based-mfa.md)</li><li>[Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md?pivots=b2c-user-flow)</li><li>[Configure security analytics for Azure AD B2C data with Microsoft Sentinel](./azure-sentinel.md)</li></ul>|
+|Security Information and Event Management (SIEM)/ Security Orchestration, Automation and Response (SOAR) |You need a reliable monitoring and alerting system for analyzing usage patterns, such as sign-ins and sign-ups, and detecting any anomalous behavior that may be indicative of a cyberattack. It's an important step that adds an extra layer of security. It also allows you to understand patterns and trends that can only be captured and built upon over time. Alerting assists in determining factors such as the rate of change in overall sign-ins, an increase in failed sign-ins and failed sign-up journeys, phone-based fraud such as IRSF attacks, and so on. All of these can be indicators of an ongoing cyberattack that requires immediate attention. Azure AD B2C supports both high-level and fine-grained logging, as well as the generation of reports and alerts. It's advised that you implement monitoring and alerting in all production tenants. | <ul><li>[Monitor using Azure Monitor](./azure-monitor.md)</li><li>[Use reports & alerts](https://github.com/azure-ad-b2c/siem)</li><li>[Monitor for fraudulent MFA usage](./phone-based-mfa.md)</li><li>[Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md?pivots=b2c-user-flow)</li><li>[Configure security analytics for Azure AD B2C data with Microsoft Sentinel](./configure-security-analytics-sentinel.md)</li></ul>|
[![Screenshot shows Azure AD B2C security architecture diagram.](./media/security-architecture/security-architecture-high-level.png)](./media/security-architecture/security-architecture-high-level.png#lightbox)
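The alerting idea described above (watching the rate of change in failed sign-ins against a recent baseline) can be sketched in a few lines. This is an illustrative sketch only: the hourly buckets and the 3x-baseline threshold are assumptions for the example, not an Azure Monitor or Sentinel API.

```python
from collections import Counter

def hours_to_alert(failed_signin_hours, baseline_window=24, factor=3.0):
    """Return the hour buckets whose failed sign-in count spikes above
    `factor` times the average of the preceding `baseline_window` buckets."""
    counts = Counter(failed_signin_hours)
    hours = sorted(counts)
    alerts = []
    for i, hour in enumerate(hours):
        window = hours[max(0, i - baseline_window):i]
        baseline = sum(counts[h] for h in window) / max(len(window), 1)
        # Only alert once a baseline exists and the spike is clearly abnormal.
        if window and counts[hour] > factor * max(baseline, 1):
            alerts.append(hour)
    return alerts
```

In practice you would feed this from exported Azure AD B2C sign-in logs (for example, via Azure Monitor or Application Insights) rather than an in-memory list.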
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C](partner-idemia.md) - [Configure Azure Active Directory B2C with Bluink eID-Me for identity verification](partner-eid-me.md) - [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)-- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-azure-web-application-firewall.md)
+- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md)
- [Tutorial to configure Saviynt with Azure Active Directory B2C](partner-saviynt.md) - [Tutorial: Configure Keyless with Azure Active Directory B2C](partner-keyless.md)-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](azure-sentinel.md)
+- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md)
- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md) - [Billing model for Azure Active Directory B2C](billing.md) - [Azure Active Directory B2C: Region availability & data residency](data-residency.md)
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
There are two methods you can use to register the connector:
* Register the connector using a token created offline

### Register the connector using a Windows PowerShell credential object
-1. Create a Windows PowerShell Credentials object `$cred` that contains an administrative username and password for your directory. Run the following command, replacing *\<username\>* and *\<password\>*:
+1. Create a Windows PowerShell Credentials object `$cred` that contains an administrative username and password for your directory. Run the following command, replacing *\<username\>*, *\<password\>*, and *\<tenantid\>*:
   ```powershell
   $User = "<username>"
   $PlainPassword = '<password>'
+ $TenantId = '<tenantid>'
   $SecurePassword = $PlainPassword | ConvertTo-SecureString -AsPlainText -Force
   $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $SecurePassword
   ```

2. Go to **C:\Program Files\Microsoft AAD App Proxy Connector** and run the following script using the `$cred` object that you created:

   ```powershell
- .\RegisterConnector.ps1 -modulePath "C:\Program Files\Microsoft AAD App Proxy Connector\Modules\" -moduleName "AppProxyPSModule" -Authenticationmode Credentials -Usercredentials $cred -Feature ApplicationProxy
+ .\RegisterConnector.ps1 -modulePath "C:\Program Files\Microsoft AAD App Proxy Connector\Modules\" -moduleName "AppProxyPSModule" -Authenticationmode Credentials -Usercredentials $cred -Feature ApplicationProxy -TenantId $TenantId
   ```

### Register the connector using a token created offline
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Here are some known issues:
* SMS-based authentication isn't currently compatible with Azure AD Multi-Factor Authentication.
* Except for Teams, SMS-based authentication isn't compatible with native Office applications.
-* SMS-based authentication isn't recommended for B2B accounts.
+* SMS-based authentication isn't supported for B2B accounts.
* Federated users won't authenticate in the home tenant. They only authenticate in the cloud.
* If a user's default sign-in method is a text or call to your phone number, then the SMS code or voice call is sent automatically during multifactor authentication. As of June 2021, some apps will ask users to choose **Text** or **Call** first. This option prevents sending too many security codes for different apps. If the default sign-in method is the Microsoft Authenticator app ([which we highly recommend](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752)), then the app notification is sent automatically.
* SMS-based authentication has reached general availability, and we're working to remove the **(Preview)** label in the Azure portal.
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Title: Claims mapping policy
-description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens issued for specific applications.
+ Title: Claims mapping policy type
+description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens in the Microsoft identity platform.
- Previously updated : 01/06/2023
+ Last updated : 06/02/2023

# Claims mapping policy type
-In Azure AD, a **Policy** object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they're assigned.
+A policy object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they're assigned.
-A claims mapping policy is a type of **Policy** object that [modifies the claims emitted in tokens](active-directory-claims-mapping.md) issued for specific applications.
+A claims mapping policy is a type of policy object that modifies the claims included in tokens. For more information, see [Customize claims issued in the SAML token for enterprise applications](saml-claims-customization.md).
## Claim sets
-There are certain sets of claims that define how and when they're used in tokens.
+The following table lists the sets of claims that define how and when they're used in tokens.
| Claim set | Description |
-|||
-| Core claim set | Are present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
-| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can [omit or modify basic claims](active-directory-claims-mapping.md#omit-the-basic-claims-from-tokens) by using the claims mapping policies. |
-| Restricted claim set | Can't be modified using policy. The data source can't be changed, and no transformation is applied when generating these claims. |
-
-This section lists:
-- [Table 1: JSON Web Token (JWT) restricted claim set](#table-1-json-web-token-jwt-restricted-claim-set)-- [Table 2: SAML restricted claim set](#table-2-saml-restricted-claim-set)-
-### Table 1: JSON Web Token (JWT) restricted claim set
+|--|-|
+| Core claim set | Present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
+| Basic claim set | Includes the claims that are included by default for tokens in addition to the core claim set. You can omit or modify basic claims by using the claims mapping policies. |
+| Restricted claim set | Can't be modified using a policy. The data source can't be changed, and no transformation is applied when generating these claims. |
+
+### JSON Web Token (JWT) restricted claim set
+
+The following claims are in the restricted claim set for a JWT.
+
+- `.`
+- `_claim_names`
+- `_claim_sources`
+- `aai`
+- `access_token`
+- `account_type`
+- `acct`
+- `acr`
+- `acrs`
+- `actor`
+- `ageGroup`
+- `aio`
+- `altsecid`
+- `amr`
+- `app_chain`
+- `app_displayname`
+- `app_res`
+- `appctx`
+- `appctxsender`
+- `appid`
+- `appidacr`
+- `at_hash`
+- `auth_time`
+- `azp`
+- `azpacr`
+- `c_hash`
+- `ca_enf`
+- `ca_policy_result`
+- `capolids_latebind`
+- `capolids`
+- `cc`
+- `cnf`
+- `code`
+- `controls_auds`
+- `controls`
+- `credential_keys`
+- `ctry`
+- `deviceid`
+- `domain_dns_name`
+- `domain_netbios_name`
+- `e_exp`
+- `email`
+- `endpoint`
+- `enfpolids`
+- `expires_on`
+- `fido_auth_data`
+- `fwd_appidacr`
+- `fwd`
+- `graph`
+- `group_sids`
+- `groups`
+- `hasgroups`
+- `haswids`
+- `home_oid`
+- `home_puid`
+- `home_tid`
+- `identityprovider`
+- `idp`
+- `idtyp`
+- `in_corp`
+- `instance`
+- `inviteTicket`
+- `ipaddr`
+- `isbrowserhostedapp`
+- `isViral`
+- `login_hint`
+- `mam_compliance_url`
+- `mam_enrollment_url`
+- `mam_terms_of_use_url`
+- `mdm_compliance_url`
+- `mdm_enrollment_url`
+- `mdm_terms_of_use_url`
+- `msproxy`
+- `nameid`
+- `nickname`
+- `nonce`
+- `oid`
+- `on_prem_id`
+- `onprem_sam_account_name`
+- `onprem_sid`
+- `openid2_id`
+- `origin_header`
+- `platf`
+- `polids`
+- `pop_jwk`
+- `preferred_username`
+- `primary_sid`
+- `prov_data`
+- `puid`
+- `pwd_exp`
+- `pwd_url`
+- `rdp_bt`
+- `refresh_token_issued_on`
+- `refreshtoken`
+- `rh`
+- `roles`
+- `rt_type`
+- `scp`
+- `secaud`
+- `sid`
+- `sid`
+- `signin_state`
+- `source_anchor`
+- `src1`
+- `src2`
+- `sub`
+- `target_deviceid`
+- `tbid`
+- `tbidv2`
+- `tenant_ctry`
+- `tenant_display_name`
+- `tenant_region_scope`
+- `tenant_region_sub_scope`
+- `thumbnail_photo`
+- `tid`
+- `tokenAutologonEnabled`
+- `trustedfordelegation`
+- `ttr`
+- `unique_name`
+- `upn`
+- `user_setting_sync_url`
+- `uti`
+- `ver`
+- `verified_primary_email`
+- `verified_secondary_email`
+- `vnet`
+- `wamcompat_client_info`
+- `wamcompat_id_token`
+- `wamcompat_scopes`
+- `wids`
+- `xcb2b_rclient`
+- `xcb2b_rcloud`
+- `xcb2b_rtenant`
+- `ztdid`
> [!NOTE]
-> Any claim starting with "xms_" is restricted.
+> Any claim starting with `xms_` is restricted.
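Because restricted claims can't be emitted through a claims mapping policy, it can help to validate candidate claim names before building a policy. A minimal sketch, assuming a hand-maintained subset of the restricted list above (the full set is much longer than shown here):

```python
# Illustrative subset of the JWT restricted claim set listed above.
RESTRICTED_JWT_CLAIMS = {
    "sub", "oid", "tid", "upn", "amr", "idp",
    "email", "groups", "roles", "scp",
}

def is_restricted(claim: str) -> bool:
    # Per the note above, any claim starting with "xms_" is restricted,
    # in addition to the enumerated names.
    return claim.startswith("xms_") or claim in RESTRICTED_JWT_CLAIMS
```

A check like this is only a convenience for authoring policies; the service itself enforces the restriction regardless.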
-| Claim type (name) |
-| -- |
-|.|
-|_claim_names|
-|_claim_sources|
-|aai|
-|access_token|
-|account_type|
-|acct|
-|acr|
-|acrs|
-|actor|
-|ageGroup|
-|aio|
-|altsecid|
-|amr|
-|app_chain|
-|app_displayname|
-|app_res|
-|appctx|
-|appctxsender|
-|appid|
-|appidacr|
-|at_hash|
-|auth_time|
-|azp|
-|azpacr|
-|c_hash|
-|ca_enf|
-|ca_policy_result|
-|capolids_latebind|
-|capolids|
-|cc|
-|cnf|
-|code|
-|controls_auds|
-|controls|
-|credential_keys|
-|ctry|
-|deviceid|
-|domain_dns_name|
-|domain_netbios_name|
-|e_exp|
-|email|
-|endpoint|
-|enfpolids|
-|expires_on|
-|fido_auth_data|
-|fwd_appidacr|
-|fwd|
-|graph|
-|group_sids|
-|groups|
-|hasgroups|
-|haswids|
-|home_oid|
-|home_puid|
-|home_tid|
-|identityprovider|
-|idp|
-|idtyp|
-|in_corp|
-|instance|
-|inviteTicket|
-|ipaddr|
-|isbrowserhostedapp|
-|isViral|
-|login_hint|
-|mam_compliance_url|
-|mam_enrollment_url|
-|mam_terms_of_use_url|
-|mdm_compliance_url|
-|mdm_enrollment_url|
-|mdm_terms_of_use_url|
-|msproxy|
-|nameid|
-|nickname|
-|nonce|
-|oid|
-|on_prem_id|
-|onprem_sam_account_name|
-|onprem_sid|
-|openid2_id|
-|origin_header|
-|platf|
-|polids|
-|pop_jwk|
-|preferred_username|
-|primary_sid|
-|prov_data|
-|puid|
-|pwd_exp|
-|pwd_url|
-|rdp_bt|
-|refresh_token_issued_on|
-|refreshtoken|
-|rh|
-|roles|
-|rt_type|
-|scp|
-|secaud|
-|sid|
-|sid|
-|signin_state|
-|source_anchor|
-|src1|
-|src2|
-|sub|
-|target_deviceid|
-|tbid|
-|tbidv2|
-|tenant_ctry|
-|tenant_display_name|
-|tenant_region_scope|
-|tenant_region_sub_scope|
-|thumbnail_photo|
-|tid|
-|tokenAutologonEnabled|
-|trustedfordelegation|
-|ttr|
-|unique_name|
-|upn|
-|user_setting_sync_url|
-|uti|
-|ver|
-|verified_primary_email|
-|verified_secondary_email|
-|vnet|
-|wamcompat_client_info|
-|wamcompat_id_token|
-|wamcompat_scopes|
-|wids|
-|xcb2b_rclient|
-|xcb2b_rcloud|
-|xcb2b_rtenant|
-|ztdid|
-
-### Table 2: SAML restricted claim set
-
-The following table lists the SAML claims that are by default in the restricted claim set.
+### SAML restricted claim set
+
+The following table lists the SAML claims that are in the restricted claim set.
| Claim type (URI) |
| -- |
The following table lists the SAML claims that are by default in the restricted
| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` |
| `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` |
-These claims are restricted by default, but aren't restricted if you [set the AcceptMappedClaims property](active-directory-claims-mapping.md#update-the-application-manifest) to `true` in your app manifest *or* have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+These claims are restricted by default, but aren't restricted if you [set the AcceptMappedClaims property](saml-claims-customization.md) to `true` in your app manifest *or* have a [custom signing key](saml-claims-customization.md):
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`
These claims are restricted by default, but aren't restricted if you [set the Ac
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`
-These claims are restricted by default, but aren't restricted if you have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+These claims are restricted by default, but aren't restricted if you have a [custom signing key](saml-claims-customization.md):
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role`

## Claims mapping policy properties
-To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy isn't set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
+To control the claims that are included and where the data comes from, use the properties of a claims mapping policy. Without a policy, the system issues tokens with the following claims:
+- The core claim set.
+- The basic claim set.
+- Any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
> [!NOTE]
> Claims in the core claim set are present in every token, regardless of what this property is set to.
-### Include basic claim set
-
-**String:** IncludeBasicClaimSet
-
-**Data type:** Boolean (True or False)
-
-**Summary:** This property determines whether the basic claim set is included in tokens affected by this policy.
--- If set to True, all claims in the basic claim set are emitted in tokens affected by the policy.-- If set to False, claims in the basic claim set aren't in the tokens, unless they're individually added in the claims schema property of the same policy.---
-### Claims schema
-
-**String:** ClaimsSchema
-
-**Data type:** JSON blob with one or more claim schema entries
-
-**Summary:** This property defines which claims are present in the tokens affected by the policy, in addition to the basic claim set and the core claim set.
-For each claim schema entry defined in this property, certain information is required. Specify where the data is coming from (**Value**, **Source/ID pair**, or **Source/ExtensionID pair**), and which claim the data is emitted as (**Claim Type**).
+| String | Data type | Summary |
+| | | - |
+| **IncludeBasicClaimSet** | Boolean (True or False) | Determines whether the basic claim set is included in tokens affected by this policy. If set to True, all claims in the basic claim set are emitted in tokens affected by the policy. If set to False, claims in the basic claim set aren't in the tokens, unless they're individually added in the claims schema property of the same policy. |
+| **ClaimsSchema** | JSON blob with one or more claim schema entries | Defines which claims are present in the tokens affected by the policy, in addition to the basic claim set and the core claim set. For each claim schema entry defined in this property, certain information is required. Specify where the data comes from (**Value**, **Source/ID pair**, or **Source/ExtensionID pair**) and which claim the data is emitted as (**JwtClaimType** or **SamlClaimType**). |
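As a hedged sketch of how these two properties fit together, the following builds the JSON structure this article describes. The `employeeid` source attribute and the `SamlClaimType` URI are illustrative placeholders, and the exact payload shape required by Microsoft Graph or PowerShell when creating the policy may differ.

```python
import json

# Illustrative claims mapping policy: include the basic claim set, plus one
# extra claim sourced from the user's employeeid attribute.
policy = {
    "ClaimsMappingPolicy": {
        "Version": 1,
        "IncludeBasicClaimSet": "true",
        "ClaimsSchema": [
            {
                "Source": "user",
                "ID": "employeeid",
                "JwtClaimType": "employeeid",
                "SamlClaimType": "http://schemas.contoso.com/identity/claims/employeeid",
            }
        ],
    }
}
print(json.dumps(policy, indent=2))
```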
### Claim schema entry elements
-**Value:** The Value element defines a static value as the data to be emitted in the claim.
-
-**SAMLNameFormat:** The SAML Name Format property specifies the value for the "NameFormat" attribute for this claim. If present, the allowed values are:
-- urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified -- urn:oasis:names:tc:SAML:2.0:attrname-format:uri -- urn:oasis:names:tc:SAML:2.0:attrname-format:basic -
-**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.
-
-**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory extension attribute where the data in the claim is sourced from. For more information, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
-
-Set the Source element to one of the following values:
--- "user": The data in the claim is a property on the User object.-- "application": The data in the claim is a property on the application (client) service principal.-- "resource": The data in the claim is a property on the resource service principal.-- "audience": The data in the claim is a property on the service principal that is the audience of the token (either the client or resource service principal).-- "company": The data in the claim is a property on the resource tenant's Company object.-- "transformation": The data in the claim is from claims transformation (see the "Claims transformation" section later in this article).-
-If the source is transformation, the **TransformationID** element must be included in this claim definition as well.
-
-The ID element identifies which property on the source provides the value for the claim. The following table lists the values of ID valid for each value of Source.
--
-> [!WARNING]
-> Currently, the only available multi-valued claim sources on a user object are multi-valued extension attributes which have been synced from AADConnect. Other properties, such as OtherMails and tags, are multi-valued but only one value is emitted when selected as a source.
-
-#### Table 3: Valid ID values per source
+- **Value** - Defines a static value as the data to be emitted in the claim.
+- **SAMLNameFormat** - Defines the value for the NameFormat attribute for this claim. If present, the allowed values are:
+ - `urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified`
+ - `urn:oasis:names:tc:SAML:2.0:attrname-format:uri`
+ - `urn:oasis:names:tc:SAML:2.0:attrname-format:basic`
+- **Source/ID pair** - Defines where the data in the claim is sourced from.
+- **Source/ExtensionID pair** - Defines the directory extension attribute where the data in the claim is sourced from. For more information, see [Using directory extension attributes in claims](schema-extensions.md).
+- **Claim Type** - The **JwtClaimType** and **SamlClaimType** elements define which claim this claim schema entry refers to.
+ - The **JwtClaimType** must contain the name of the claim to be emitted in JWTs.
+ - The **SamlClaimType** must contain the URI of the claim to be emitted in SAML tokens.
+
+Set the **Source** element to one of the values in the following table.
+
+| Source value | Data in claim |
+| | - |
+| `user` | Property on the User object. |
+| `application` | Property on the application (client) service principal. |
+| `resource` | Property on the resource service principal. |
+| `audience` | Property on the service principal that is the audience of the token (either the client or resource service principal). |
+| `company` | Property on the resource tenant's Company object. |
+| `transformation` | Claims transformation. When you use this claim, the **TransformationID** element must be included in the claim definition. The **TransformationID** element must match the ID element of the transformation entry in the **ClaimsTransformation** property that defines how the data for the claim is generated. |
+
+The ID element identifies the property on the source that provides the value for the claim. The following table lists the values of the ID element for each value of Source.
| Source | ID | Description |
-|--|--|--|
-| User | surname | Family Name |
-| User | givenname | Given Name |
-| User | displayname | Display Name |
-| User | objectid | ObjectID |
-| User | mail | Email Address |
-| User | userprincipalname | User Principal Name |
-| User | department|Department|
-| User | onpremisessamaccountname | On-premises SAM Account Name |
-| User | netbiosname| NetBios Name |
-| User | dnsdomainname | DNS Domain Name |
-| User | onpremisesecurityidentifier | On-premises Security Identifier |
-| User | companyname| Organization Name |
-| User | streetaddress | Street Address |
-| User | postalcode | Postal Code |
-| User | preferredlanguage | Preferred Language |
-| User | onpremisesuserprincipalname | On-premises UPN |
-| User | mailnickname | Mail Nickname |
-| User | extensionattribute1 | Extension Attribute 1 |
-| User | extensionattribute2 | Extension Attribute 2 |
-| User | extensionattribute3 | Extension Attribute 3 |
-| User | extensionattribute4 | Extension Attribute 4 |
-| User | extensionattribute5 | Extension Attribute 5 |
-| User | extensionattribute6 | Extension Attribute 6 |
-| User | extensionattribute7 | Extension Attribute 7 |
-| User | extensionattribute8 | Extension Attribute 8 |
-| User | extensionattribute9 | Extension Attribute 9 |
-| User | extensionattribute10 | Extension Attribute 10 |
-| User | extensionattribute11 | Extension Attribute 11 |
-| User | extensionattribute12 | Extension Attribute 12 |
-| User | extensionattribute13 | Extension Attribute 13 |
-| User | extensionattribute14 | Extension Attribute 14 |
-| User | extensionattribute15 | Extension Attribute 15 |
-| User | othermail | Other Mail |
-| User | country | Country/Region |
-| User | city | City |
-| User | state | State |
-| User | jobtitle | Job Title |
-| User | employeeid | Employee ID |
-| User | facsimiletelephonenumber | Facsimile Telephone Number |
-| User | assignedroles | list of App roles assigned to user|
-| User | accountEnabled | Account Enabled |
-| User | consentprovidedforminor | Consent Provided For Minor |
-| User | createddatetime | Created Date/Time|
-| User | creationtype | Creation Type |
-| User | lastpasswordchangedatetime | Last Password Change Date/Time |
-| User | mobilephone | Mobile Phone |
-| User | officelocation | Office Location |
-| User | onpremisesdomainname | On-premises Domain Name |
-| User | onpremisesimmutableid | On-premises Immutable ID |
-| User | onpremisessyncenabled | On-premises Sync Enabled |
-| User | preferreddatalocation | Preferred Data Location |
-| User | proxyaddresses | Proxy Addresses |
-| User | usertype | User Type |
-| User | telephonenumber| Business Phones / Office Phones |
-| application, resource, audience | displayname | Display Name |
-| application, resource, audience | objectid | ObjectID |
-| application, resource, audience | tags | Service Principal Tag |
-| Company | tenantcountry | Tenant's country/region |
-
-**TransformationID:** The TransformationID element must be provided only if the Source element is set to "transformation".
--- This element must match the ID element of the transformation entry in the **ClaimsTransformation** property that defines how the data for this claim is generated.-
-**Claim Type:** The **JwtClaimType** and **SamlClaimType** elements define which claim this claim schema entry refers to.
--- The JwtClaimType must contain the name of the claim to be emitted in JWTs.-- The SamlClaimType must contain the URI of the claim to be emitted in SAML tokens.-
-* **onPremisesUserPrincipalName attribute:** When using an Alternate ID, the on-premises attribute userPrincipalName is synchronized with the Azure AD attribute onPremisesUserPrincipalName. This attribute is only available when Alternate ID is configured.
-
-> [!NOTE]
-> Names and URIs of claims in the restricted claim set cannot be used for the claim type elements. For more information, see the "Exceptions and restrictions" section later in this article.
+|--|-|-|
+| `user` | `surname` | The family name of the user. |
+| `user` | `givenname` | The given name of the user. |
+| `user` | `displayname` | The display name of the user. |
+| `user` | `objectid` | The object ID of the user. |
+| `user` | `mail` | The email address of the user. |
+| `user` | `userprincipalname` | The user principal name of the user. |
+| `user` | `department` | The department of the user. |
+| `user` | `onpremisessamaccountname` | The on-premises SAM account name of the user. |
+| `user` | `netbiosname` | The NetBios name of the user. |
+| `user` | `dnsdomainname` | The DNS domain name of the user. |
+| `user` | `onpremisesecurityidentifier` | The on-premises security identifier of the user. |
+| `user` | `companyname` | The organization name of the user. |
+| `user` | `streetaddress` | The street address of the user. |
+| `user` | `postalcode` | The postal code of the user.|
+| `user` | `preferredlanguage` | The preferred language of the user. |
+| `user` | `onpremisesuserprincipalname` | The on-premises UPN of the user. When you use an alternate ID, the on-premises attribute `userPrincipalName` is synchronized with the `onPremisesUserPrincipalName` attribute. This attribute is only available when Alternate ID is configured.|
+| `user` | `mailnickname` | The mail nickname of the user. |
+| `user` | `extensionattribute1` | Extension attribute 1. |
+| `user` | `extensionattribute2` | Extension attribute 2. |
+| `user` | `extensionattribute3` | Extension attribute 3. |
+| `user` | `extensionattribute4` | Extension attribute 4. |
+| `user` | `extensionattribute5` | Extension attribute 5. |
+| `user` | `extensionattribute6` | Extension attribute 6. |
+| `user` | `extensionattribute7` | Extension attribute 7. |
+| `user` | `extensionattribute8` | Extension attribute 8. |
+| `user` | `extensionattribute9` | Extension attribute 9. |
+| `user` | `extensionattribute10` | Extension attribute 10. |
+| `user` | `extensionattribute11` | Extension attribute 11. |
+| `user` | `extensionattribute12` | Extension attribute 12. |
+| `user` | `extensionattribute13` | Extension attribute 13. |
+| `user` | `extensionattribute14` | Extension attribute 14. |
+| `user` | `extensionattribute15` | Extension attribute 15. |
+| `user` | `othermail` | The other mail of the user.|
+| `user` | `country` | The country/region of the user. |
+| `user` | `city` | The city of the user. |
+| `user` | `state` | The state of the user. |
+| `user` | `jobtitle` | The job title of the user. |
+| `user` | `employeeid` | The employee ID of the user. |
+| `user` | `facsimiletelephonenumber` | The facsimile telephone number of the user. |
+| `user` | `assignedroles` | The list of app roles assigned to the user. |
+| `user` | `accountEnabled` | Indicates whether the user account is enabled. |
+| `user` | `consentprovidedforminor` | Indicates whether consent was provided for a minor. |
+| `user` | `createddatetime` | The date and time that the user account was created. |
+| `user` | `creationtype` | Indicates how the user account was created. |
+| `user` | `lastpasswordchangedatetime` | The last date and time that the password was changed. |
+| `user` | `mobilephone` | The mobile phone of the user. |
+| `user` | `officelocation` | The office location of the user. |
+| `user` | `onpremisesdomainname` | The on-premises domain name of the user. |
+| `user` | `onpremisesimmutableid` | The on-premises immutable ID of the user. |
+| `user` | `onpremisessyncenabled` | Indicates whether on-premises sync is enabled. |
+| `user` | `preferreddatalocation` | Defines the preferred data location of the user. |
+| `user` | `proxyaddresses` | The proxy addresses of the user. |
+| `user` | `usertype` | The type of user account. |
+| `user` | `telephonenumber` | The business or office phones of the user. |
+| `application`, `resource`, `audience` | `displayname` | The display name of the object. |
+| `application`, `resource`, `audience` | `objectid` | The ID of the object. |
+| `application`, `resource`, `audience` | `tags` | The service principal tag of the object. |
+| `company` | `tenantcountry` | The country/region of the tenant. |
+
+The only available multi-valued claim sources on a user object are multi-valued extension attributes that have been synced from Azure AD Connect. Other properties, such as `othermails` and `tags`, are multi-valued, but only one value is emitted when selected as a source.
+
+Names and URIs of claims in the restricted claim set can't be used for the claim type elements.
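As an illustration of how the sources and IDs in the preceding table are used, a claim schema entry that emits the user's email address from the `user` source might look like the following sketch. The `SamlClaimType` and `JwtClaimType` values shown here are illustrative placeholders, not required names:

```json
"ClaimsSchema": [
    {
        "Source": "user",
        "ID": "mail",
        "SamlClaimType": "https://contoso.com/claims/email",
        "JwtClaimType": "email"
    }
]
```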
### Group Filter
-**String:** GroupFilter
-
-**Data type:** JSON blob
-
-**Summary:** Use this property to apply a filter on the user's groups to be included in the group claim. This can be a useful means of reducing the token size.
-
-**MatchOn:** The **MatchOn** property identifies the group attribute on which to apply the filter.
-
-Set the **MatchOn** property to one of the following values:
-
-- "displayname": The group display name.
-- "samaccountname": The On-premises SAM Account Name
-
-**Type:** The **Type** property selects the type of filter you wish to apply to the attribute selected by the **MatchOn** property.
-
-Set the **Type** property to one of the following values:
-
-- "prefix": Include groups where the **MatchOn** property starts with the provided **Value** property.
-- "suffix": Include groups where the **MatchOn** property ends with the provided **Value** property.
-- "contains": Include groups where the **MatchOn** property contains with the provided **Value** property.
+- **String** - GroupFilter
+- **Data type** - JSON blob
+- **Summary** - Use this property to apply a filter on the user's groups to be included in the group claim. This property can be a useful means of reducing the token size.
+- **MatchOn** - Identifies the group attribute on which to apply the filter. Set the **MatchOn** property to one of the following values:
+ - `displayname` - The group display name.
+ - `samaccountname` - The on-premises SAM account name.
+- **Type** - Defines the type of filter applied to the attribute selected by the **MatchOn** property. Set the **Type** property to one of the following values:
+ - `prefix` - Include groups where the **MatchOn** property starts with the provided **Value** property.
+  - `suffix` - Include groups where the **MatchOn** property ends with the provided **Value** property.
+  - `contains` - Include groups where the **MatchOn** property contains the provided **Value** property.
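Put together, a group filter that keeps only groups whose display name starts with a given string might look like the following fragment. This is a hedged sketch based on the property descriptions above; the `Contoso` value is illustrative:

```json
"GroupFilter": {
    "MatchOn": "displayname",
    "Type": "prefix",
    "Value": "Contoso"
}
```

With this filter, only groups whose display name begins with `Contoso` are emitted in the group claim, which keeps the token small for users with many group memberships.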
### Claims transformation
-**String:** ClaimsTransformation
-
-**Data type:** JSON blob, with one or more transformation entries
-
-**Summary:** Use this property to apply common transformations to source data, to generate the output data for claims specified in the Claims Schema.
-
-**ID:** Use the ID element to reference this transformation entry in the TransformationID Claims Schema entry. This value must be unique for each transformation entry within this policy.
-
-**TransformationMethod:** The TransformationMethod element identifies which operation is performed to generate the data for the claim.
+- **String** - ClaimsTransformation
+- **Data type** - JSON blob, with one or more transformation entries
+- **Summary** - Use this property to apply common transformations to source data to generate the output data for claims specified in the Claims Schema.
+- **ID** - References the transformation entry in the TransformationID Claims Schema entry. This value must be unique for each transformation entry within this policy.
+- **TransformationMethod** - Identifies the operation that's performed to generate the data for the claim.
Based on the method chosen, a set of inputs and outputs is expected. Define the inputs and outputs by using the **InputClaims**, **InputParameters** and **OutputClaims** elements.
-#### Table 4: Transformation methods and expected inputs and outputs
-
-|TransformationMethod|Expected input|Expected output|Description|
-|--|--|--|--|
-|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
-|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other directory extensions, which are storing a UPN or email address value for the user, for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
-
-**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType** and **TreatAsMultiValue**
-
-- **ClaimTypeReferenceId** is joined with ID element of the claim schema entry to find the appropriate input claim.
-- **TransformationClaimType** is used to give a unique name to this input. This name must match one of the expected inputs for the transformation method.
-- **TreatAsMultiValue** is a Boolean flag indicating if the transform should be applied to all values or just the first. By default, transformations will only be applied to the first element in a multi value claim, by setting this value to true it ensures it's applied to all. ProxyAddresses and groups are two examples for input claims that you would likely want to treat as a multi value.
-
-**InputParameters:** Use an InputParameters element to pass a constant value to a transformation. It has two attributes: **Value** and **ID**.
-
-- **Value** is the actual constant value to be passed.
-- **ID** is used to give a unique name to the input. The name must match one of the expected inputs for the transformation method.
-
-**OutputClaims:** Use an OutputClaims element to hold the data generated by a transformation, and tie it to a claim schema entry. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
-
-- **ClaimTypeReferenceId** is joined with the ID of the claim schema entry to find the appropriate output claim.
-- **TransformationClaimType** is used to give a unique name to the output. The name must match one of the expected outputs for the transformation method.
+| TransformationMethod | Expected input | Expected output | Description |
+|-|-|--|-|
+| **Join** | string1, string2, separator | output claim | Joins input strings by using a separator in between. For example, string1: `foo@bar.com`, string2: `sandbox`, separator: `.` results in output claim: `foo@bar.com.sandbox`. |
+| **ExtractMailPrefix** | Email or UPN | extracted string | Extracts the local part of an email address. The input can be extension attributes 1-15 or any other directory extension that stores a UPN or email address value for the user, for example, `johndoe@contoso.com`. For example, mail: `foo@bar.com` results in output claim: `foo`. If no \@ sign is present, the original input string is returned. |
+
+- **InputClaims** - Used to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType**, and **TreatAsMultiValue**.
+  - **ClaimTypeReferenceId** - Joined with the ID element of the claim schema entry to find the appropriate input claim.
+  - **TransformationClaimType** - Gives a unique name to this input. This name must match one of the expected inputs for the transformation method.
+  - **TreatAsMultiValue** - A Boolean flag that indicates whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multi-value claim. Setting this value to true ensures it's applied to all. ProxyAddresses and groups are two examples of input claims that you would likely want to treat as a multi-value claim.
+- **InputParameters** - Passes a constant value to a transformation. It has two attributes: **Value** and **ID**.
+  - **Value** - The actual constant value to be passed.
+  - **ID** - Gives a unique name to the input. The name must match one of the expected inputs for the transformation method.
+- **OutputClaims** - Holds the data generated by a transformation, and ties it to a claim schema entry. It has two attributes: **ClaimTypeReferenceId** and **TransformationClaimType**.
+  - **ClaimTypeReferenceId** - Joined with the ID of the claim schema entry to find the appropriate output claim.
+  - **TransformationClaimType** - Gives a unique name to the output. The name must match one of the expected outputs for the transformation method.
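The elements above can be sketched as a transformation entry that joins `extensionattribute1` with the constant `sandbox`. The IDs and claim type names (`JoinTheData`, `DataJoin`, `JoinedData`) are illustrative, not required values:

```json
"ClaimsSchema": [
    { "Source": "user", "ID": "extensionattribute1" },
    {
        "Source": "transformation",
        "ID": "DataJoin",
        "TransformationId": "JoinTheData",
        "JwtClaimType": "JoinedData"
    }
],
"ClaimsTransformation": [
    {
        "ID": "JoinTheData",
        "TransformationMethod": "Join",
        "InputClaims": [
            { "ClaimTypeReferenceId": "extensionattribute1", "TransformationClaimType": "string1" }
        ],
        "InputParameters": [
            { "ID": "string2", "Value": "sandbox" },
            { "ID": "separator", "Value": "." }
        ],
        "OutputClaims": [
            { "ClaimTypeReferenceId": "DataJoin", "TransformationClaimType": "outputClaim" }
        ]
    }
]
```

Note how the **ID** of the transformation entry (`JoinTheData`) is referenced by the claim schema entry through **TransformationId**, and each **TransformationClaimType** (`string1`, `string2`, `separator`, `outputClaim`) matches an input or output that the **Join** method expects.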
### Exceptions and restrictions
-**SAML NameID and UPN:** The attributes from which you source the NameID and UPN values, and the claims transformations that are permitted, are limited. See table 5 and table 6 to see the permitted values.
-
-#### Table 5: Attributes allowed as a data source for SAML NameID
-
-|Source|ID|Description|
-|--|--|--|
-| User | mail|Email Address|
-| User | userprincipalname|User Principal Name|
-| User | onpremisessamaccountname|On Premises Sam Account Name|
-| User | employeeid|Employee ID|
-| User | telephonenumber| Business Phones / Office Phones |
-| User | extensionattribute1 | Extension Attribute 1 |
-| User | extensionattribute2 | Extension Attribute 2 |
-| User | extensionattribute3 | Extension Attribute 3 |
-| User | extensionattribute4 | Extension Attribute 4 |
-| User | extensionattribute5 | Extension Attribute 5 |
-| User | extensionattribute6 | Extension Attribute 6 |
-| User | extensionattribute7 | Extension Attribute 7 |
-| User | extensionattribute8 | Extension Attribute 8 |
-| User | extensionattribute9 | Extension Attribute 9 |
-| User | extensionattribute10 | Extension Attribute 10 |
-| User | extensionattribute11 | Extension Attribute 11 |
-| User | extensionattribute12 | Extension Attribute 12 |
-| User | extensionattribute13 | Extension Attribute 13 |
-| User | extensionattribute14 | Extension Attribute 14 |
-| User | extensionattribute15 | Extension Attribute 15 |
-
-#### Table 6: Transformation methods allowed for SAML NameID
+**SAML NameID and UPN** - The attributes from which you source the NameID and UPN values, and the claims transformations that are permitted, are limited.
+
+| Source | ID | Description |
+|--|-|-|
+| `user` | `mail` | The email address of the user. |
+| `user` | `userprincipalname` | The user principal name of the user. |
+| `user` | `onpremisessamaccountname` | The on-premises SAM account name of the user. |
+| `user` | `employeeid` | The employee ID of the user. |
+| `user` | `telephonenumber` | The business or office phones of the user. |
+| `user` | `extensionattribute1` | Extension attribute 1. |
+| `user` | `extensionattribute2` | Extension attribute 2. |
+| `user` | `extensionattribute3` | Extension attribute 3. |
+| `user` | `extensionattribute4` | Extension attribute 4. |
+| `user` | `extensionattribute5` | Extension attribute 5. |
+| `user` | `extensionattribute6` | Extension attribute 6. |
+| `user` | `extensionattribute7` | Extension attribute 7. |
+| `user` | `extensionattribute8` | Extension attribute 8. |
+| `user` | `extensionattribute9` | Extension attribute 9. |
+| `user` | `extensionattribute10` | Extension attribute 10. |
+| `user` | `extensionattribute11` | Extension attribute 11. |
+| `user` | `extensionattribute12` | Extension attribute 12. |
+| `user` | `extensionattribute13` | Extension attribute 13. |
+| `user` | `extensionattribute14` | Extension attribute 14. |
+| `user` | `extensionattribute15` | Extension attribute 15. |
+
+The transformation methods listed in the following table are allowed for SAML NameID.
| TransformationMethod | Restrictions |
-| -- | -- |
+| -- | -- |
| ExtractMailPrefix | None |
| Join | The suffix being joined must be a verified domain of the resource tenant. |

### Issuer With Application ID
-**String:** issuerWithApplicationId
-**Data type:** Boolean (True or False)
-**Summary:** This property enables the addition of the application ID to the issuer claim. Ensures that multiple instances of the same application have a unique claim value for each instance. This setting is ignored if a custom signing key isn't configured for the application.
-- If set to `True`, the application ID is added to the issuer claim in tokens affected by the policy.
-- If set to `False`, the application ID isn't added to the issuer claim in tokens affected by the policy. (default)
-
+
+- **String** - issuerWithApplicationId
+- **Data type** - Boolean (True or False)
+- **Summary** - Enables the application ID to be included in the issuer claim. Ensures that multiple instances of the same application have a unique claim value for each instance. This setting is ignored if a custom signing key isn't configured for the application.
+  - If set to `True`, the application ID is added to the issuer claim in tokens affected by the policy.
+  - If set to `False` (default), the application ID isn't added to the issuer claim in tokens affected by the policy.
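As a hedged sketch, turning this property on might look like the following fragment, assuming the property sits in the policy definition alongside `Version`; the placement and surrounding properties shown are illustrative:

```json
"ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "issuerWithApplicationId": "true"
}
```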
+ ### Audience Override
-**String:** audienceOverride
-**Data type:** String
-**Summary:** This property enables the overriding of the audience claim sent to the application. The value provided must be a valid absolute URI. This setting is ignored if no custom signing key is configured for the application. 
+- **String** - audienceOverride
+- **Data type** - String
+- **Summary** - Enables you to override the audience claim sent to the application. The value provided must be a valid absolute URI. This setting is ignored if no custom signing key is configured for the application.
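A corresponding sketch for overriding the audience follows; the `urn:contoso:app:audience` URI is a hypothetical example, and the placement alongside `Version` is an assumption rather than a required layout:

```json
"ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "audienceOverride": "urn:contoso:app:audience"
}
```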
## Next steps
-
-- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
-- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
+- To learn more about extension attributes, see [Directory extension attributes in claims](schema-extensions.md).
active-directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-claims-customization.md
-# Customize claims issued in the SAML token for enterprise applications
+# Customize SAML token claims
-The Microsoft identity platform supports [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md) with most preintegrated applications in the Azure Active Directory (Azure AD) application gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, the Microsoft identity platform sends a token to the application. The application validates and uses the token to sign the user in instead of prompting for a username and password.
+The Microsoft identity platform supports [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md) with most preintegrated applications in the application gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, a token is sent to the application. The application validates and uses the token to sign the user in instead of prompting for a username and password.
-These SAML tokens contain pieces of information about the user known as *claims*. A claim is information that an identity provider states about a user inside the token they issue for that user. In a SAML token, claims data is typically contained in the SAML Attribute Statement. The user's unique ID is typically represented in the SAML Subject, which is also referred to as the name identifier (`nameID`).
+These SAML tokens contain pieces of information about the user known as *claims*. A claim is information that an identity provider states about a user inside the token they issue for that user. In a SAML token, claims data is typically contained in the SAML Attribute Statement. The user's unique ID is typically represented in the SAML subject, which is also referred to as the name identifier (`nameID`).
-By default, the Microsoft identity platform issues a SAML token to an application that contains a `NameIdentifier` claim with a value of the user's username (also known as the user principal name) in Azure AD, which can uniquely identify the user. The SAML token also contains other claims that include the user's email address, first name, and last name.
+By default, the Microsoft identity platform issues a SAML token to an application that contains a claim with a value of the user's username (also known as the user principal name), which can uniquely identify the user. The SAML token also contains other claims that include the user's email address, first name, and last name.
## View or edit claims
To view or edit the claims issued in the SAML token to the application, open the
You might need to edit the claims issued in the SAML token for the following reasons:
-* The application requires the `NameIdentifier` or `nameID` claim to be something other than the username (or user principal name) stored in Azure AD.
+* The application requires the `NameIdentifier` or `nameID` claim to be something other than the username (or user principal name).
* The application has been written to require a different set of claim URIs or claim values. ## Edit nameID
-To edit the `nameID` (name identifier value) claim:
+To edit the name identifier value claim:
1. Open the **Name identifier value** page.
1. Select the attribute or transformation that you want to apply to the attribute. Optionally, you can specify the format that you want the `nameID` claim to have.
If the SAML request doesn't contain an element for `NameIDPolicy`, then the Micr
From the **Choose name identifier format** dropdown, select one of the options in the following table.

| `nameID` format | Description |
-||-|
+|--|-|
| **Default** | Microsoft identity platform uses the default source format. |
| **Persistent** | Microsoft identity platform uses `Persistent` as the `nameID` format. |
| **Email address** | Microsoft identity platform uses `EmailAddress` as the `nameID` format. |
Select the desired source for the `NameIdentifier` (or `nameID`) claim. You can
For more information about identifier values, see the table that lists the valid ID values per source later in this page.
-Any constant (static) value can be assigned to any claim that is defined in Azure AD. Use the following steps to assign a constant value:
+Any constant (static) value can be assigned to any claim. Use the following steps to assign a constant value:
1. In the [Azure portal](https://portal.azure.com/), in the **User Attributes & Claims** section, select **Edit** to edit the claims.
1. Select the required claim that you want to modify.
Any constant (static) value can be assigned to any claim that is defined in Azur
### Directory Schema extensions (Preview)
-You can also configure directory schema extension attributes as non-conditional/conditional attributes in Azure AD. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim:
+You can also configure directory schema extension attributes as non-conditional/conditional attributes. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim:
1. In the [Azure portal](https://portal.azure.com/), in the **User Attributes & Claims** section, select **Edit** to edit the claims.
1. Select **Add new claim** or edit an existing claim.
You can also configure directory schema extension attributes as non-conditional/
   :::image type="content" source="./media/saml-claims-customization/mv-extension-2.jpg" alt-text="Screenshot of the source application selection in MultiValue extension configuration section in the Azure portal.":::

1. Select **Add** to add the selection to the claims.
-
-<!
-5. To select single or multi-valued directory schema extension attribute as conditional attribute select **Directory schema extension** option from the source dropdown.
-
- :::image type="content" source="./media/active-directory-saml-claims-customization/mv-extension-3.png" alt-text="Screenshot of the MultiValue extension configuration for conditional claims section in the Azure portal.":::
->
-
-5. Click **Save** to commit the changes.
-
+1. Click **Save** to commit the changes.
## Special claims transformations
You can use the following functions to transform claims.
| Function | Description |
|-|-|
-| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this input combination results in 'joe_smith@fabrikam.com'. |
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through. For example, `joe_smith` instead of `joe_smith@contoso.com`. |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For the `nameID` claim transformation, the **Join()** function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is `joe_smith@contoso.com` and the separator is `@` and the parameter is `fabrikam.com`, this input combination results in `joe_smith@fabrikam.com`. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
-| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
-| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
-| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
-| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon", the matching value is "Finance_", then the claim's output is "BSimon". |
-| **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "BSimon_US", the matching value is "_US", then the claim's output is "BSimon". |
-| **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance\_", the second matching value is "\_US", then the claim's output is "BSimon". |
-| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "BSimon". |
-| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is "123_Simon", then it returns "Simon". |
-| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". |
-| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "123". |
-| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extension attribute if the employee ID for a given user is empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
-| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extension attribute if the employee ID for a given user isn't empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
-| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Length - 11<br/>Output: ExtractThis |
-| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
-| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br/>- Parameter 1: a user attribute as regex input<br/>- An option to trust the source as multivalued<br/>- Regex pattern<br/>- Replacement pattern. The replacement pattern may contain static text format along with a reference that points to regex output groups and more input parameters.<br/><br/>More instructions about how to use the RegexReplace() transformation are described later in this article. |
-
-If you need other transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
+| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. For example, if you want to emit a claim where the value is the user's email address if it contains the domain `@contoso.com`, otherwise you want to output the user principal name. To perform this function, configure the following values: `Parameter 1(input): user.email`, `Value: "@contoso.com"`, `Parameter 2 (output): user.email`, and `Parameter 3 (output if there's no match): user.userprincipalname`. |
+| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with `000`, otherwise you want to output an extension attribute. To perform this function, configure the following values: `Parameter 1(input): user.employeeid`, `Value: "000"`, `Parameter 2 (output): user.employeeid`, and `Parameter 3 (output if there's no match): user.extensionattribute1`. |
+| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match. For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with `US`, otherwise you want to output an extension attribute. To perform this function, configure the following values: `Parameter 1(input): user.country`, `Value: "US"`, `Parameter 2 (output): user.employeeid`, and `Parameter 3 (output if there's no match): user.extensionattribute1`. |
+| **Extract() - After matching** | Returns the substring after it matches the specified value. For example, if the input's value is `Finance_BSimon`, the matching value is `Finance_`, then the claim's output is `BSimon`. |
+| **Extract() - Before matching** | Returns the substring until it matches the specified value. For example, if the input's value is `BSimon_US`, the matching value is `_US`, then the claim's output is `BSimon`. |
+| **Extract() - Between matching** | Returns the substring between the two specified matching values. For example, if the input's value is `Finance_BSimon_US`, the first matching value is `Finance_`, the second matching value is `_US`, then the claim's output is `BSimon`. |
+| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string. For example, if the input's value is `BSimon_123`, then it returns `BSimon`. |
+| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string. For example, if the input's value is `123_Simon`, then it returns `Simon`. |
+| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string. For example, if the input's value is `123_BSimon`, then it returns `123`. |
+| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string. For example, if the input's value is `BSimon_123`, then it returns `123`. |
+| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty. For example, if you want to output an attribute stored in an extension attribute if the employee ID for a user is empty. To perform this function, configure the following values: `Parameter 1(input): user.employeeid`, `Parameter 2 (output): user.extensionattribute1`, and `Parameter 3 (output if there's no match): user.employeeid`. |
+| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty. For example, if you want to output an attribute stored in an extension attribute if the employee ID for a user isn't empty. To perform this function, configure the following values: `Parameter 1(input): user.employeeid` and `Parameter 2 (output): user.extensionattribute1`. |
+| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters. The `sourceClaim` is the claim source of the transform that should be executed. The `StartIndex` is the zero-based starting character position of a substring in this instance. The `Length` is the length in characters of the substring. For example, `sourceClaim - PleaseExtractThisNow`, `StartIndex - 6`, and `Length - 11` produces an output of `ExtractThis`. |
+| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. The `sourceClaim` is the claim source of the transform that should be executed. The `StartIndex` is the zero-based starting character position of a substring in this instance. For example, `sourceClaim - PleaseExtractThisNow` and `StartIndex - 6` produces an output of `ExtractThisNow`. |
+| **RegexReplace()** (Preview) | For more information about regex-based claims transformation, see the next section. |
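As a rough mental model of the input/output behavior of a few transformations above, here is a plain-Python sketch. Azure AD evaluates these transformations server-side; this code is only illustrative and is not part of the service.

```python
# Illustrative sketch only: plain-Python equivalents of a few transformations,
# using the example values from the table above.

def extract_after(value: str, match: str) -> str:
    """Extract() - After matching: substring after the first match."""
    idx = value.find(match)
    return value[idx + len(match):] if idx != -1 else value

def substring_fixed(value: str, start_index: int, length: int) -> str:
    """Substring() - Fixed Length: zero-based start index, fixed length."""
    return value[start_index:start_index + length]

def substring_end(value: str, start_index: int) -> str:
    """Substring() - EndOfString: zero-based start index to end of claim."""
    return value[start_index:]

print(extract_after("Finance_BSimon", "Finance_"))     # BSimon
print(substring_fixed("PleaseExtractThisNow", 6, 11))  # ExtractThis
print(substring_end("PleaseExtractThisNow", 6))        # ExtractThisNow
```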
## Regex-based claims transformation
The following image shows an example of the first level of transformation:
:::image type="content" source="./media/saml-claims-customization/regexreplace-transform1.png" alt-text="Screenshot of the first level of transformation.":::
-The following table provides information about the first level of transformations. The actions listed in the table correspond to the labels in the previous image. Select **Edit** to open the claims transformation blade.
+The actions listed in the following table provide information about the first level of transformations and correspond to the labels in the previous image. Select **Edit** to open the claims transformation blade.
| Action | Field | Description |
| :-- | :- | :- |
-| 1 | `Transformation` | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
-| 2 | `Parameter 1` | The input for the regular expression transformation. For example, user.mail that has a user email address such as `admin@fabrikam.com`. |
-| 3 | `Treat source as multivalued` | Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and the user wants to use multiple values for the transformation, they need to select **Treat source as multivalued**. If selected, all values are used for the regex match, otherwise only the first value is used. |
-| 4 | `Regex pattern` | A regular expression that is evaluated against the value of user attribute selected as *Parameter 1*. For example, a regular expression to extract the user alias from the user's email address would be represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
-| 5 | `Add additional parameter` | More than one user attribute can be used for the transformation. The values of the attributes would then be merged with regex transformation output. Up to five more parameters are supported. |
-| 6 | `Replacement pattern` | The replacement pattern is the text template, which contains placeholders for regex outcome. All group names must be wrapped inside the curly braces such as `{group-name}`. Let's say the administration wants to use user alias with some other domain name, for example `xyz.com` and merge country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+| `1` | `Transformation` | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
+| `2` | `Parameter 1` | The input for the regular expression transformation. For example, `user.mail`, which contains a user email address such as `admin@fabrikam.com`. |
+| `3` | `Treat source as multivalued` | Some input user attributes can be multivalue user attributes. If the selected user attribute supports multiple values and you want to use multiple values for the transformation, select **Treat source as multivalued**. If selected, all values are used for the regex match; otherwise, only the first value is used. |
+| `4` | `Regex pattern` | A regular expression that is evaluated against the value of user attribute selected as *Parameter 1*. For example, a regular expression to extract the user alias from the user's email address would be represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
+| `5` | `Add additional parameter` | More than one user attribute can be used for the transformation. The values of the attributes would then be merged with regex transformation output. Up to five more parameters are supported. |
+| `6` | `Replacement pattern` | The replacement pattern is the text template that contains placeholders for the regex outcome. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
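The regex example above uses .NET named-group syntax (`(?'domain'...)`). As an illustrative approximation only (not service code), the same match and replacement can be sketched in Python, whose named-group syntax is `(?P<domain>...)`:

```python
import re

# Approximating the article's RegexReplace() example in Python. The pattern,
# replacement pattern, and values are taken from the table above; the scoped
# (?i:...) group stands in for the .NET (?i) inline flag.
pattern = r"(?P<domain>^.*?)(?i:@fabrikam\.com)$"
m = re.match(pattern, "swmal@fabrikam.com")
if m:
    # Replacement pattern {country}.{domain}@xyz.com, where {country} is an
    # extra input parameter and {domain} is the regex group output.
    country = "US"
    claim = f"{country}.{m.group('domain')}@xyz.com"
    print(claim)  # US.swmal@xyz.com
```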
The following image shows an example of the second level of transformation:
The following table provides information about the second level of transformations. The actions listed in the table correspond to the labels in the previous image.
| Action | Field | Description |
| :-- | :- | :- |
-| 1 | `Transformation` | Regex-based claims transformations aren't limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation. |
-| 2 | `Parameter 1` | If **RegexReplace()** is selected as a second level transformation, output of first level transformation is used as an input for the second level transformation. To apply the transformation, the second level regex expression should match the output of the first transformation. |
-| 3 | `Regex pattern` | **Regex pattern** is the regular expression for the second level transformation. |
-| 4 | `Parameter input` | User attribute inputs for the second level transformations. |
-| 5 | `Parameter input` | Administrators can delete the selected input parameter if they don't need it anymore. |
-| 6 | `Replacement pattern` | The replacement pattern is the text template, which contains placeholders for regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside the curly braces such as `{group-name}`. Let's say the administration wants to use user alias with some other domain name, for example `xyz.com` and merge country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of input parameter and {domain} is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
-| 7 | `Test transformation` | The RegexReplace() transformation is evaluated only if the value of the selected user attribute for *Parameter 1* matches with the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When more input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
+| `1` | `Transformation` | Regex-based claims transformations aren't limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation. |
+| `2` | `Parameter 1` | If **RegexReplace()** is selected as a second level transformation, output of first level transformation is used as an input for the second level transformation. To apply the transformation, the second level regex expression should match the output of the first transformation. |
+| `3` | `Regex pattern` | **Regex pattern** is the regular expression for the second level transformation. |
+| `4` | `Parameter input` | User attribute inputs for the second level transformations. |
+| `5` | `Parameter input` | Administrators can delete the selected input parameter if they don't need it anymore. |
+| `6` | `Replacement pattern` | The replacement pattern is the text template that contains placeholders for the regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+| `7` | `Test transformation` | The RegexReplace() transformation is evaluated only if the value of the selected user attribute for *Parameter 1* matches with the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When more input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
The following image shows an example of testing the transformations:
The following table provides information about testing the transformations. The actions listed in the table correspond to the labels in the previous image.
| Action | Field | Description |
| :-- | :- | :- |
-| 1 | `Test transformation` | Select the close or (X) button to hide the test section and re-render the **Test transformation** button again on the blade. |
-| 2 | `Test regex input` | Accepts input that is used for the regular expression test evaluation. In case regex-based claims transformation is configured as a second level transformation, provide a value that is the expected output of the first transformation. |
-| 3 | `Run test` | After the test regex input is provided and the **Regex pattern**, **Replacement pattern** and **Input parameters** are configured, the expression can be evaluated by selecting **Run test**. |
-| 4 | `Test transformation result` | If evaluation succeeds, an output of test transformation is rendered against the **Test transformation result** label. |
-| 5 | `Remove transformation` | The second level transformation can be removed by selecting **Remove transformation**. |
-| 6 | `Specify output if no match` | When a regex input value is configured against the *Parameter 1* that doesn't match the **Regular expression**, the transformation is skipped. In such cases, the alternate user attribute can be configured, which is added to the token for the claim by checking **Specify output if no match**. |
-| 7 | `Parameter 3` | If an alternate user attribute needs to be returned when there's no match and **Specify output if no match** is checked, an alternate user attribute can be selected using the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
-| 8 | `Summary` | At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text. |
-| 9 | `Add` | After the configuration settings for the transformation are verified, it can be saved to a claims policy by selecting **Add**. Select **Save** on the **Manage Claim** blade to save the changes. |
+| `1` | `Test transformation` | Select the close or (X) button to hide the test section and re-render the **Test transformation** button again on the blade. |
+| `2` | `Test regex input` | Accepts input that is used for the regular expression test evaluation. In case regex-based claims transformation is configured as a second level transformation, provide a value that is the expected output of the first transformation. |
+| `3` | `Run test` | After the test regex input is provided and the **Regex pattern**, **Replacement pattern** and **Input parameters** are configured, the expression can be evaluated by selecting **Run test**. |
+| `4` | `Test transformation result` | If evaluation succeeds, an output of test transformation is rendered against the **Test transformation result** label. |
+| `5` | `Remove transformation` | The second level transformation can be removed by selecting **Remove transformation**. |
+| `6` | `Specify output if no match` | When a regex input value is configured against the *Parameter 1* that doesn't match the **Regular expression**, the transformation is skipped. In such cases, the alternate user attribute can be configured, which is added to the token for the claim by checking **Specify output if no match**. |
+| `7` | `Parameter 3` | If an alternate user attribute needs to be returned when there's no match and **Specify output if no match** is checked, an alternate user attribute can be selected using the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
+| `8` | `Summary` | At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text. |
+| `9` | `Add` | After the configuration settings for the transformation are verified, it can be saved to a claims policy by selecting **Add**. Select **Save** on the **Manage Claim** blade to save the changes. |
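A minimal sketch (plain Python, not service code) of the "Specify output if no match" behavior described in rows 6 and 7: if *Parameter 1* matches the regex pattern, the replacement is emitted; otherwise the alternate attribute configured as *Parameter 3* is used. The attribute values below are hypothetical examples.

```python
import re

# Illustrative only: emulate RegexReplace() with a no-match fallback.
def regex_replace_with_fallback(value, pattern, build_output, fallback):
    m = re.match(pattern, value)
    # On a match, build the claim from the regex groups; otherwise the
    # transformation is skipped and the alternate attribute is emitted.
    return build_output(m) if m else fallback

claim = regex_replace_with_fallback(
    "swmal@fabrikam.com",
    r"(?P<domain>^.*?)@fabrikam\.com$",
    lambda m: f"US.{m.group('domain')}@xyz.com",
    fallback="britta.simon@contoso.com",  # hypothetical alternate attribute value
)
print(claim)  # US.swmal@xyz.com
```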
RegexReplace() transformation is also available for the group claims transformations.
When the following conditions occur after **Add** or **Run test** is selected, a message is displayed that provides more information about the issue.
## Add the UPN claim to SAML tokens
-The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you can't add it in the **Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
+The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md), so you can't add it in the **Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
Open the application in **App registrations**, select **Token configuration**, and then select **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and then click **Add** to add the claim to the token.
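As an alternative to the portal steps above, the optional claim can be expressed in the application manifest. The fragment below is a sketch based on the manifest's `optionalClaims` schema; verify the exact shape against your manifest version before using it.

```json
{
  "optionalClaims": {
    "saml2Token": [
      {
        "name": "upn",
        "source": null,
        "essential": false,
        "additionalProperties": []
      }
    ]
  }
}
```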
First, the Microsoft identity platform verifies whether Britta's user type is **
:::image type="content" source="./media/saml-claims-customization/mv-extension-3.png" alt-text="Screenshot of claims conditional configuration.":::
-As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
+As another example, consider when Britta Simon tries to sign in and the following configuration is used. All conditions are first evaluated with the source of `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, the transformations are evaluated. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
:::image type="content" source="./media/saml-claims-customization/sso-saml-user-conditional-claims-2.png" alt-text="Screenshot of more claims conditional configuration.":::
active-directory How To Desktop App Maui Sample Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-desktop-app-maui-sample-sign-in.md
In this article, you do the following tasks:
## Prerequisites
-- [Visual Studio Code](https://code.visualstudio.com/download) with the MAUI workload installed:
+- [.NET 7.0 SDK](https://dotnet.microsoft.com/download/dotnet/7.0)
+- [Visual Studio 2022](https://aka.ms/vsdownloads) with the MAUI workload installed:
- [Instructions for Windows](/dotnet/maui/get-started/installation?tabs=vswin)
- [Instructions for macOS](/dotnet/maui/get-started/installation?tabs=vsmac)
- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
active-directory How To Mobile App Maui Sample Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-mobile-app-maui-sample-sign-in.md
In this article, you do the following tasks:
## Prerequisites
-- [Visual Studio Code](https://code.visualstudio.com/download) with the MAUI workload installed:
+- [.NET 7.0 SDK](https://dotnet.microsoft.com/download/dotnet/7.0)
+- [Visual Studio 2022](https://aka.ms/vsdownloads) with the MAUI workload installed:
- [Instructions for Windows](/dotnet/maui/get-started/installation?tabs=vswin)
- [Instructions for macOS](/dotnet/maui/get-started/installation?tabs=vsmac)
- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Previously, Conditional Access policies applied only to users when they access a
**Service category:** Enterprise Apps **Product capability:** SSO
-Several user attributes have been added to the list of attributes available to map to claims to bring attributes available in claims more in line with what is available on the user object in Microsoft Graph. New attributes include mobilePhone and ProxyAddresses. [Learn more](../develop/reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
+Several user attributes have been added to the list of attributes available to map to claims to bring attributes available in claims more in line with what is available on the user object in Microsoft Graph. New attributes include mobilePhone and ProxyAddresses. [Learn more](../develop/reference-claims-mapping-policy-type.md).
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
-+ Last updated 03/23/2023
active-directory Migrate Adfs Apps Phases Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-phases-overview.md
+
+ Title: 'Plan application migration to Azure Active Directory'
+description: This article discusses the advantages of Azure Active Directory and provides a four-phase guide for planning and executing a migration strategy with detailed planning and exit criteria.
+ Last updated: 05/31/2023
+# Plan application migration to Azure Active Directory
+
+In this article, you'll learn about the benefits of Azure Active Directory (Azure AD) and how to plan for migrating your application authentication. This article gives an overview of the planning and exit criteria to help you plan your migration strategy and understand how Azure AD authentication can support your organizational goals.
+
+The process is broken into four phases, each with detailed planning and exit criteria, and designed to help you plan your migration strategy and understand how Azure AD authentication supports your organizational goals.
+
+> [!VIDEO https://www.youtube.com/embed/8WmquuuuaLk]
+
+## Introduction
+
+Today, your organization requires numerous applications for users to get work done. You likely continue to add, develop, or retire apps every day. Users access these applications from a vast range of corporate and personal devices, and locations. They open apps in many ways, including:
+
+- Through a company homepage or portal
+- By bookmarking or adding favorites on their browsers
+- Through a vendor's URL for software as a service (SaaS) apps
+- Links pushed directly to users' desktops or mobile devices via a mobile device/application management (MDM/MAM) solution
+
+Your applications are likely using the following types of authentication:
+
+- Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) via an on-premises or cloud-hosted identity and access management (IAM) federation solution (such as Active Directory Federation Services (AD FS), Okta, or Ping)
+
+- Kerberos or NTLM via Active Directory
+
+- Header-based authentication via Ping Access
+
+To ensure that the users can easily and securely access applications, your goal is to have a single set of access controls and policies across your on-premises and cloud environments.
+
+[Azure AD](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your employees, partners, and customers a single identity to access the applications they want and collaborate from any platform and device.
+Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD gets you the benefits that these capabilities provide.
+
+You can find more migration resources at [https://aka.ms/migrateapps](./migration-resources.md).
+
+## Plan your migration phases and project strategy
+
+When technology projects fail, it's often due to mismatched expectations, the right stakeholders not being involved, or a lack of communication. Ensure your success by planning the project itself.
+
+### The phases of migration
+
+Before we get into the tools, you should understand how to think through the migration process. Based on several direct-to-customer workshops, we recommend the following four phases:
+### Assemble the project team
+
+Application migration is a team effort, and you need to ensure that all the vital positions are filled. Support from senior business leaders is important. Ensure that you involve the right set of executive sponsors, business decision-makers, and subject matter experts (SMEs).
+
+During the migration project, one person may fulfill multiple roles, or multiple people may fulfill each role, depending on your organization's size and structure. You may also have a dependency on other teams that play a key role in your security landscape.
+
+The following table includes the key roles and their contributions:
+
+| Role | Contributions |
+| - | - |
+| **Project Manager** | Project coach accountable for guiding the project, including:<br /> - gain executive support<br /> - bring in stakeholders<br /> - manage schedules, documentation, and communications |
+| **Identity Architect / Azure AD App Administrator** | Responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the preproduction and production environments |
+| **On-premises AD operations team** | The organization that manages the different on-premises identity sources such as AD forests, LDAP directories, HR systems, etc.<br /> - perform any remediation tasks needed before synchronizing<br /> - provide the service accounts required for synchronization<br /> - provide access to configure federation to Azure AD |
+| **IT Support Manager** | A representative from the IT support organization who can provide input on the supportability of this change from a helpdesk perspective. |
+| **Security Owner** | A representative from the security team that can ensure that the plan meets the security requirements of your organization. |
+| **Application technical owners** | Includes technical owners of the apps and services that integrate with Azure AD. They provide the applications' identity attributes that should be included in the synchronization process. They usually have a relationship with CSV representatives. |
+| **Application business owners** | Representative colleagues who can provide input on the user experience and usefulness of this change from a user's perspective, and who own the overall business aspect of the application, which may include managing access. |
+| **Pilot group of users** | Users who test the pilot experience as part of their daily work and provide feedback to guide the rest of the deployments. |
+
+### Plan communications
+
+Effective business engagement and communication are the keys to success. It's important to give stakeholders and end users an avenue to get information and stay informed of schedule updates. Educate everyone about the value of the migration, what the expected timelines are, and how to plan for any temporary business disruption. Use multiple avenues such as briefing sessions, emails, one-to-one meetings, banners, and town halls.
+
+Based on the communication strategy that you've chosen for the app, you may want to remind users of the pending downtime. You should also verify that there are no recent changes or business impacts that would require you to postpone the deployment.
+
+The following tables show the minimum suggested communications to keep your stakeholders informed:
+
+
+| Communication | Audience |
+| | - |
+| Awareness and business / technical value of project | All except end users |
+| Solicitation for pilot apps | - App business owners<br />- App technical owners<br />- Architects and Identity team |
+
+**Phase 1 – Discover and Scope**:
+
+| Communication | Audience |
+| | - |
+| - Solicitation for application information<br />- Outcome of scoping exercise | - App technical owners<br />- App business owners |
+
+**Phase 2 – Classify apps and plan pilot**:
+
+| Communication | Audience |
+| | - |
+| - Outcome of classifications and what that means for migration schedule<br />- Preliminary migration schedule | - App technical owners<br /> - App business owners |
+
+**Phase 3 – Plan migration and testing**:
+
+| Communication | Audience |
+| | - |
+| - Outcome of application migration testing | - App technical owners<br />- App business owners |
| - Notification that migration is coming and explanation of resultant end-user experiences.<br />- Downtime schedule and complete communications, including what users should now do, how to provide feedback, and how to get help | - End users (and all others) |
+
+**Phase 4 – Manage and gain insights**:
+
+| Communication | Audience |
+| | - |
+| Available analytics and how to access | - App technical owners<br />- App business owners |
+
+## Migration states communication dashboard
+
+Communicating the overall state of the migration project is crucial, as it shows progress, and helps app owners whose apps are coming up for migration to prepare for the move. You can put together a simple dashboard using Power BI or other reporting tools to provide visibility into the status of applications during the migration.
+
+The migration states you might consider using are as follows:
+
+| Migration states | Action plan |
+| - | |
+| **Initial Request** | Find the app and contact the owner for more information |
+| **Assessment Complete** | App owner evaluates the app requirements and returns the app questionnaire |
+| **Configuration in Progress** | Develop the changes necessary to manage authentication against Azure AD |
+| **Test Configuration Successful** | Evaluate the changes and authenticate the app against the test Azure AD tenant in the test environment |
+| **Production Configuration Successful** | Change the configurations to work against the production Azure AD tenant and assess the app authentication in the test environment |
+| **Complete / Sign Off** | Deploy the changes for the app to the production environment and execute against the production Azure AD tenant |
+
+This approach ensures that app owners know the app migration and testing schedule when their apps are up for migration, and can see the results from other apps that have already been migrated. You might also consider providing links to your bug tracker database so owners can file and view issues for apps that are being migrated.
+
+## Best practices
+
+The following articles describe our customers' and partners' success stories and suggested best practices:
+
+- [Five tips to improve the migration process to Azure Active Directory](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Five-tips-to-improve-the-migration-process-to-Azure-Active/ba-p/445364) by Patriot Consulting, a member of our partner network that focuses on helping customers deploy Microsoft cloud solutions securely.
+
+- [Develop a risk management strategy for your Azure AD application migration](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Develop-a-risk-management-strategy-for-your-Azure-AD-application/ba-p/566488) by Edgile, a partner that focuses on IAM and risk management solutions.
+
+## Next steps
+
+- [Phase 1 - Discover and Scope](migrate-adfs-discover-scope-apps.md).
active-directory Migrate Adfs Apps Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-stages.md
+
+ Title: 'Understand the stages of migrating application authentication from AD FS to Azure AD'
+description: This article provides the stages of the migration process and what types of applications to migrate.
+ Last updated : 05/31/2023
+# Understand the stages of migrating application authentication from AD FS to Azure AD
+
+Azure Active Directory (Azure AD) offers a universal identity platform that provides your people, partners, and customers a single identity to access applications and collaborate from any platform and device. Azure AD has a full suite of identity management capabilities. Standardizing your application authentication and authorization to Azure AD provides these benefits.
+
+## Types of apps to migrate
+
+Your applications may use modern or legacy protocols for authentication. When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and OpenID Connect) first.
+
+These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the custom application in Azure AD.
+
+Apps that use older protocols can be integrated using [Application Proxy](../app-proxy/what-is-application-proxy.md) or any of our [Secure Hybrid Access (SHA) partners](secure-hybrid-access-integrations.md).
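The routing decision described in the paragraphs above can be sketched as a small helper. This is a hypothetical illustration only: the function name, protocol set, and return strings are assumptions, not part of any Azure AD API.

```python
# Modern protocols are reconfigured against Azure AD directly (gallery
# connector or app registration); everything else goes through
# Application Proxy or a Secure Hybrid Access (SHA) partner.
MODERN_PROTOCOLS = {"saml", "openid connect"}

def integration_path(protocol: str) -> str:
    """Return the suggested integration route for an app's auth protocol."""
    if protocol.lower() in MODERN_PROTOCOLS:
        return "reconfigure against Azure AD"
    return "Application Proxy or SHA partner"
```

Running an app inventory through a classifier like this is one way to produce the first-pass migration wave list.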
+
+For more information, see:
+
+* [Using Azure AD Application Proxy to publish on-premises apps for remote users](../app-proxy/what-is-application-proxy.md).
+* [What is application management?](what-is-application-management.md)
+* [AD FS application activity report to migrate applications to Azure AD](migrate-adfs-application-activity.md).
+* [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md).
+
+## The migration process
+
+During the process of moving your app authentication to Azure AD, test your apps and configuration. We recommend that you continue to use existing test environments for migration testing before you move to the production environment. If a test environment isn't currently available, you can set one up using [Azure App Service](https://azure.microsoft.com/services/app-service/) or [Azure Virtual Machines](https://azure.microsoft.com/free/virtual-machines/search/?OCID=AID2000128_SEM_lHAVAxZC&MarinID=lHAVAxZC_79233574796345_azure%20virtual%20machines_be_c__1267736956991399_kwd-79233582895903%3Aloc-190&lnkd=Bing_Azure_Brand&msclkid=df6ac75ba7b612854c4299397f6ab5b0&ef_id=XmAptQAAAJXRb3S4%3A20200306231230%3As&dclid=CjkKEQiAhojzBRDg5ZfomsvdiaABEiQABCU7XjfdCUtsl-Abe1RAtAT35kOyI5YKzpxRD6eJS2NM97zw_wcB), depending on the architecture of the application.
+
+You may choose to set up a separate test Azure AD tenant on which to develop your app configurations.
+
+Your migration process may look like this:
+
+### Stage 1 – Current state: The production app authenticates with AD FS
++
+### Stage 2 – (Optional) Point a test instance of the app to the test Azure AD tenant
+
+Update the configuration to point your test instance of the app to a test Azure AD tenant, and make any required changes. The app can be tested with users in the test Azure AD tenant. During the development process, you can use tools such as [Fiddler](https://www.telerik.com/fiddler) to compare and verify requests and responses.
+
+If it isn't feasible to set up a separate test tenant, skip this stage and point a test instance of the app to your production Azure AD tenant as described in Stage 3 below.
++
+### Stage 3 – Point a test instance of the app to the production Azure AD tenant
+
+Update the configuration to point your test instance of the app to your production Azure AD tenant. You can now test with users in your production tenant. If necessary, review the section of this article on transitioning users.
++
+### Stage 4 – Point the production app to the production Azure AD tenant
+
+Update the configuration of your production app to point to your production Azure AD tenant.
++
+ Apps that authenticate with AD FS can use Active Directory groups for permissions. Use [Azure AD Connect sync](../hybrid/how-to-connect-sync-whatis.md) to sync identity data between your on-premises environment and Azure AD before you begin migration. Verify those groups and membership before migration so that you can grant access to the same users when the application is migrated.
+
+## Line of business apps
+
+Your line-of-business apps are those that your organization developed or those that are a standard packaged product.
+
+Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Entra portal](https://entra.microsoft.com/#home).
+
+## Next steps
+
+- [Configure SAML-based single sign-on](migrate-adfs-saml-based-sso.md).
active-directory Migrate Adfs Classify Apps Plan Pilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-classify-apps-plan-pilot.md
-+ Last updated 05/30/2023
active-directory Migrate Adfs Discover Scope Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-discover-scope-apps.md
-+ Last updated 05/30/2023
active-directory Migrate Adfs Plan Management Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-plan-management-insights.md
-+ Last updated 05/30/2023
active-directory Migrate Adfs Plan Migration Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-plan-migration-test.md
-+ Last updated 05/30/2023
active-directory Migrate Adfs Represent Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-represent-security-policies.md
-+ Last updated 05/31/2023
active-directory Migrate Adfs Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-saml-based-sso.md
-+ Last updated 05/31/2023
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Previously updated : 01/25/2023 Last updated : 06/09/2023
# Create or delete administrative units
+> [!IMPORTANT]
+> Restricted management administrative units are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Administrative units let you subdivide your organization into any unit that you want, and then assign specific administrators that can manage only the members of that unit. For example, you could use administrative units to delegate permissions to administrators of each school at a large university, so they could control access, manage users, and set policies only in the School of Engineering. This article describes how to create or delete administrative units to restrict the scope of role permissions in Azure Active Directory (Azure AD).
This article describes how to create or delete administrative units to restrict
- Azure AD Premium P1 or P2 license for each administrative unit administrator
- Azure AD Free licenses for administrative unit members
-- Privileged Role Administrator or Global Administrator
+- Privileged Role Administrator role
+- Microsoft.Graph module when using [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation)
- AzureAD module when using PowerShell
+- AzureADPreview module when using PowerShell and restricted management administrative units
- Admin consent when using Graph explorer for Microsoft Graph API

For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
You can create a new administrative unit by using either the Azure portal, PowerShell, or Microsoft Graph API.
1. In the **Name** box, enter the name of the administrative unit. Optionally, add a description of the administrative unit.
+1. If you don't want tenant-level administrators to be able to access this administrative unit, set the **Restricted management administrative unit** toggle to **Yes**. For more information, see [Restricted management administrative units](admin-units-restricted-management.md).
+ ![Screenshot showing the Add administrative unit page and the Name box for entering the name of the administrative unit.](./media/admin-units-manage/add-new-admin-unit.png)

1. Optionally, on the **Assign roles** tab, select a role and then select the users to assign the role to with this administrative unit scope.
You can create a new administrative unit by using either the Azure portal, PowerShell, or Microsoft Graph API.
### PowerShell
-Use the [New-AzureADMSAdministrativeUnit](/powershell/module/azuread/new-azureadmsadministrativeunit) command to create a new administrative unit.
+# [Microsoft Graph PowerShell](#tab/ms-powershell)
+
+Use the [Connect-MgGraph](/powershell/microsoftgraph/authentication-commands?branch=main#using-connect-mggraph) command to sign in to your tenant and consent to the required permissions.
```powershell
-New-AzureADMSAdministrativeUnit -Description "West Coast region" -DisplayName "West Coast"
+Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"
```
-### Microsoft Graph PowerShell
+Use the [New-MgDirectoryAdministrativeUnit](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdirectoryadministrativeunit?branch=main) command to create a new administrative unit.
-Use the [New-MgDirectoryAdministrativeUnit](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdirectoryadministrativeunit) command to create a new administrative unit.
+```powershell
+$params = @{
+ DisplayName = "Seattle District Technical Schools"
+ Description = "Seattle district technical schools administration"
+ Visibility = "HiddenMembership"
+}
+$adminUnitObj = New-MgDirectoryAdministrativeUnit -BodyParameter $params
+```
+
+Use the [New-MgDirectoryAdministrativeUnit (beta)](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdirectoryadministrativeunit?view=graph-powershell-beta&preserve-view=true&branch=main) command to create a new restricted management administrative unit. Set the `IsMemberManagementRestricted` property to `$true`.
```powershell
-Import-Module Microsoft.Graph.Identity.DirectoryManagement
+Select-MgProfile -Name beta
$params = @{
- DisplayName = "Seattle District Technical Schools"
- Description = "Seattle district technical schools administration"
- Visibility = "HiddenMembership"
+ DisplayName = "Contoso Executive Division"
+ Description = "Contoso Executive Division administration"
+ Visibility = "HiddenMembership"
+ IsMemberManagementRestricted = $true
}
-New-MgDirectoryAdministrativeUnit -BodyParameter $params
+$restrictedAU = New-MgDirectoryAdministrativeUnit -BodyParameter $params
```
+# [Azure AD PowerShell](#tab/aad-powershell)
++
+Use the [New-AzureADMSAdministrativeUnit](/powershell/module/azuread/new-azureadmsadministrativeunit?branch=main) command to create a new administrative unit.
+
+```powershell
+$adminUnitObj = New-AzureADMSAdministrativeUnit -Description "West Coast region" -DisplayName "West Coast"
+```
+
+Use the [New-AzureADMSAdministrativeUnit (preview)](/powershell/module/azuread/new-azureadmsadministrativeunit?view=azureadps-2.0-preview&preserve-view=true&branch=main) command to create a new restricted management administrative unit. Set the `IsMemberManagementRestricted` parameter to `$true`.
+
+```powershell
+$restrictedAU = New-AzureADMSAdministrativeUnit -DisplayName "Contoso Executive Division" -IsMemberManagementRestricted $true
+```
+++

### Microsoft Graph API
-Use the [Create administrativeUnit](/graph/api/administrativeunit-post-administrativeunits) API to create a new administrative unit.
+Use the [Create administrativeUnit](/graph/api/administrativeunit-post-administrativeunits?branch=main) API to create a new administrative unit.
Request
Body
} ```
+Use the [Create administrativeUnit (beta)](/graph/api/directory-post-administrativeunits?view=graph-rest-beta&preserve-view=true&branch=main) API to create a new restricted management administrative unit. Set the `isMemberManagementRestricted` property to `true`.
+
+Request
+
+```http
+POST https://graph.microsoft.com/beta/administrativeUnits
+```
+
+Body
+
+```http
+{
+ "displayName": "Contoso Executive Division",
+ "description": "This administrative unit contains executive accounts of Contoso Corp.",
+ "isMemberManagementRestricted": true
+}
+```
+
## Delete an administrative unit

In Azure AD, you can delete an administrative unit that you no longer need as a unit of scope for administrative roles. Before you delete the administrative unit, you should remove any role assignments with that administrative unit scope.
In Azure AD, you can delete an administrative unit that you no longer need as a
### PowerShell
-Use the [Remove-AzureADMSAdministrativeUnit](/powershell/module/azuread/remove-azureadmsadministrativeunit) command to delete an administrative unit.
+# [Microsoft Graph PowerShell](#tab/ms-powershell)
+
+Use the [Remove-MgDirectoryAdministrativeUnit](/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdirectoryadministrativeunit?branch=main) command to delete an administrative unit.
+
+```powershell
+$adminUnitObj = Get-MgDirectoryAdministrativeUnit -Filter "DisplayName eq 'Seattle District Technical Schools'"
+Remove-MgDirectoryAdministrativeUnit -AdministrativeUnitId $adminUnitObj.Id
+```
+
+# [Azure AD PowerShell](#tab/aad-powershell)
++
+Use the [Remove-AzureADMSAdministrativeUnit](/powershell/module/azuread/remove-azureadmsadministrativeunit?branch=main) command to delete an administrative unit.
```powershell
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'DeleteMe Admin Unit'"
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "DisplayName eq 'Seattle District Technical Schools'"
Remove-AzureADMSAdministrativeUnit -Id $adminUnitObj.Id
```

++

### Microsoft Graph API

Use the [Delete administrativeUnit](/graph/api/administrativeunit-delete) API to delete an administrative unit.
DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}
- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
+- [Azure AD administrative units: Troubleshooting and FAQ](admin-units-faq-troubleshoot.yml)
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 10/05/2022 Last updated : 06/09/2023
# Add users, groups, or devices to an administrative unit
-In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to restrict the scope of role permissions. Adding a group to an administrative unit brings the group itself into the management scope of the administrative unit, but **not** the members of the group. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
+In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to limit the scope of role permissions. Adding a group to an administrative unit brings the group itself into the management scope of the administrative unit, but **not** the members of the group. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
This article describes how to add users, groups, or devices to administrative units manually. For information about how to add users or devices to administrative units dynamically using rules, see [Manage users or devices for an administrative unit with dynamic membership rules](admin-units-members-dynamic.md).
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Previously updated : 06/01/2022 Last updated : 06/09/2023
You can list the users, groups, or devices in administrative units using the Azure portal, PowerShell, or Microsoft Graph API.
![Screenshot of All devices page with an administrative unit filter.](./media/admin-units-members-list/device-admin-unit-filter.png)
+### List the restricted management administrative units for a single user or group
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Users** or **Groups**, and then select the user or group whose restricted management administrative units you want to list.
+
+1. Select **Administrative units** to list all the administrative units where the user or group is a member.
+
+1. In the **Restricted management** column, look for administrative units that are set to **Yes**.
+
+ ![Screenshot of the Administrative units page with the Restricted management column.](./media/admin-units-members-list/list-restricted-management-admin-unit.png)
+
## PowerShell

Use the [Get-AzureADMSAdministrativeUnit](/powershell/module/azuread/get-azureadmsadministrativeunit) and [Get-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/get-azureadmsadministrativeunitmember) commands to list users or groups for an administrative unit.
foreach ($member in (Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id)
## Microsoft Graph API
-Use the [List members](/graph/api/administrativeunit-list-members) API to list users or groups for an administrative unit.
-
-Use the [List members (Beta)](/graph/api/administrativeunit-list-members?view=graph-rest-beta&preserve-view=true) API to list devices for an administrative unit.
- ### List the administrative units for a user
+Use the user [List memberOf](/graph/api/user-list-memberof) API to list the administrative units a user is a direct member of.
+
```http
GET https://graph.microsoft.com/v1.0/users/{user-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
```

### List the administrative units for a group
+Use the group [List memberOf](/graph/api/group-list-memberof) API to list the administrative units a group is a direct member of.
+
```http
GET https://graph.microsoft.com/v1.0/groups/{group-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
```

### List the administrative units for a device
+Use the [List device memberships](/graph/api/device-list-memberof) API to list the administrative units a device is a direct member of.
+ ```http
-GET https://graph.microsoft.com/beta/devices/{device-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
+GET https://graph.microsoft.com/v1.0/devices/{device-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
```
-### List the groups for an administrative unit
+### List the users, groups, or devices for an administrative unit
+
+Use the [List members](/graph/api/administrativeunit-list-members) API to list the users, groups, or devices for an administrative unit. For member type, specify `microsoft.graph.user`, `microsoft.graph.group`, or `microsoft.graph.device`.
```http
GET https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/$/microsoft.graph.group
```
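For scripting, the typed member-list URL shown above can be assembled with a small helper. The function name is hypothetical; the endpoint pattern is the one documented in this section.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def members_url(admin_unit_id: str, member_type: str) -> str:
    """Build the Graph URL listing members of an administrative unit,
    filtered to one member type: user, group, or device."""
    if member_type not in {"user", "group", "device"}:
        raise ValueError(f"unsupported member type: {member_type}")
    return (f"{GRAPH_BASE}/directory/administrativeUnits/{admin_unit_id}"
            f"/members/$/microsoft.graph.{member_type}")
```

The returned URL can then be issued as a GET request with a bearer token that carries the appropriate directory read permission.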
-### List the devices for an administrative unit
+### List whether a single user is in a restricted management administrative unit
+
+Use the [Get a user (beta)](/graph/api/user-get?view=graph-rest-beta&preserve-view=true) API to determine whether a user is in a restricted management administrative unit. Look at the value of the `isManagementRestricted` property. If the property is `true`, the user is in a restricted management administrative unit. If the property is `false`, empty, or null, the user is not in a restricted management administrative unit.
```http
-GET https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/$/microsoft.graph.device
+GET https://graph.microsoft.com/beta/users/{user-id}
```
+Response
+
+```
+{
+ "displayName": "John",
+ "isManagementRestricted": true,
+ "userPrincipalName": "john@contoso.com"
+}
+```
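The tri-state check described above — only an explicit `true` counts — can be sketched as a one-line helper. The function name is hypothetical; the property name is the one returned by the Graph beta endpoint.

```python
def is_management_restricted(user: dict) -> bool:
    """True only when the parsed Graph user object carries an explicit
    isManagementRestricted: true; false, empty, or null all mean the
    user is not in a restricted management administrative unit."""
    return user.get("isManagementRestricted") is True
```

Using an identity check against `True` avoids treating absent or null values as restricted.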
## Next steps
active-directory Admin Units Members Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md
Previously updated : 03/22/2022 Last updated : 06/09/2023
Remove-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitId -MemberId $devic
``` ## Microsoft Graph API
-Use the [Remove a member](/graph/api/administrativeunit-delete-members) API to remove users or groups from an administrative unit.
+Use the [Remove a member](/graph/api/administrativeunit-delete-members) API to remove users, groups, or devices from an administrative unit. For `{member-id}`, specify the user, group, or device ID.
-Use the [Remove a member (Beta)](/graph/api/administrativeunit-delete-members?view=graph-rest-beta&preserve-view=true) API to remove devices from an administrative unit.
-
-### Remove users from an administrative unit
-
-```http
-DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{user-id}/$ref
-```
-
-### Remove groups from an administrative unit
-
-```http
-DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{group-id}/$ref
-```
-
-### Remove devices from an administrative unit
+### Remove users, groups, or devices from an administrative unit
```http
-DELETE https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/{device-id}/$ref
+DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{member-id}/$ref
``` ## Next steps
active-directory Admin Units Restricted Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-restricted-management.md
+
+ Title: Restricted management administrative units in Azure Active Directory (Preview)
+description: Use restricted management administrative units for more sensitive resources in Azure Active Directory.
+
+documentationcenter: ''
+ Last updated : 06/09/2023
+# Restricted management administrative units in Azure Active Directory (Preview)
+
+> [!IMPORTANT]
+> Restricted management administrative units are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Restricted management administrative units allow you to protect specific objects in your tenant from access by anyone other than a specific set of administrators that you designate. This allows you to meet security or compliance requirements without having to remove tenant-level role assignments from your administrators.
+
+## Why use restricted management administrative units?
+
+Here are some reasons why you might use restricted management administrative units to help manage access in your tenant.
+
+- You want to protect your C-level executive accounts and their devices from Helpdesk Administrators who would otherwise be able to reset their passwords or access BitLocker recovery keys. You can add your C-level user accounts to a restricted management administrative unit and designate a specific trusted set of administrators who can reset their passwords and access BitLocker recovery keys when needed.
+- You're implementing a compliance control to ensure that certain resources can only be managed by administrators in a specific country. You can add those resources to a restricted management administrative unit and assign local administrators to manage those objects. Even Global Administrators can't modify the objects unless they explicitly assign themselves to a role scoped to the restricted management administrative unit (which is an auditable event).
+- You're using security groups to control access to sensitive applications in your organization, and you don't want tenant-scoped administrators who can modify groups to be able to control who can access the applications. You can add those security groups to a restricted management administrative unit and then be sure that only the specific administrators you assign can manage them.
+
+> [!NOTE]
+> Placing objects in restricted management administrative units severely restricts who can make changes to the objects. This restriction can cause existing workflows to break.
+
+## What objects can be members?
+
+Here are the objects that can be members of restricted management administrative units.
+
+| Azure AD object type | Administrative unit | Administrative unit with restricted management setting enabled |
+| | :: | :: |
+| Users | Yes | Yes |
+| Devices | Yes | Yes |
+| Groups (Security) | Yes | Yes |
+| Groups (Microsoft 365) | Yes | No |
+| Groups (Mail enabled security) | Yes | No |
+| Groups (Distribution) | Yes | No |
+
+## What types of operations are blocked?
+
+For administrators not explicitly assigned at the restricted management administrative unit scope, operations that directly modify the Azure AD properties of objects in restricted management administrative units are blocked, whereas operations on related objects in Microsoft 365 services aren't affected.
+
+| Operation type | Blocked | Allowed |
+| | :: | :: |
+| Read standard properties like user principal name, user photo | | :heavy_check_mark: |
+| Modify any Azure AD properties of the user, group, or device | :x: | |
+| Delete the user, group, or device | :x: | |
+| Update password for a user | :x: | |
+| Modify owners or members of the group in the restricted management administrative unit | :x: | |
+| Add users, groups, or devices in a restricted management administrative unit to groups in Azure AD | | :heavy_check_mark: |
+| Modify email & mailbox settings in Exchange for the user in the restricted management administrative unit | | :heavy_check_mark: |
+| Apply policies to a device in a restricted management administrative unit using Intune | | :heavy_check_mark: |
+| Add or remove a group as a site owner in SharePoint | | :heavy_check_mark: |
+
+## Who can modify objects?
+
+Only administrators with an explicit assignment at the scope of a restricted management administrative unit can change the Azure AD properties of objects in the restricted management administrative unit.
+
+| User role | Blocked | Allowed |
+| | :: | :: |
+| Global Administrator | :x: | |
+| Tenant-scoped administrators (including Global Administrator) | :x: | |
+| Administrators assigned at the scope of restricted management administrative unit | | :heavy_check_mark: |
+| Administrators assigned at the scope of another restricted management administrative unit of which the object is a member | | :heavy_check_mark: |
+| Administrators assigned at the scope of another regular administrative unit of which the object is a member | :x: | |
+| Groups Administrator, User Administrator, and other roles assigned at the scope of a resource | :x: | |
+| Owners of groups or devices added to restricted management administrative units | :x: | |
+
+## Limitations
+
+Here are some of the limits and constraints for restricted management administrative units.
+
+- The restricted management setting must be applied during administrative unit creation and can't be changed once the administrative unit is created.
+- Groups in a restricted management administrative unit can't be managed with [Azure AD Privileged Identity Management](../privileged-identity-management/groups-discover-groups.md).
+- Role-assignable groups, when added to a restricted management administrative unit, can't have their membership modified. Group owners aren't allowed to manage groups in restricted management administrative units and only Global Administrators and Privileged Role Administrators (neither of which can be assigned at administrative unit scope) can modify membership.
+- Certain actions may not be possible when an object is in a restricted management administrative unit, if the required role isn't one of the roles that can be assigned at administrative unit scope. For example, a Global Administrator in a restricted management administrative unit can't have their password reset by any other administrator in the system, because there's no admin role that can be assigned at the administrative unit scope that can reset the password of a Global Administrator. In such scenarios, the Global Administrator would need to be removed from the restricted management administrative unit first, and then have their password reset by another Global Administrator or Privileged Role Administrator.
+- When deleting a restricted management administrative unit, it can take up to 30 minutes to remove all protections from the former members.
+
+## Programmability
+
+Applications can't modify objects in restricted management administrative units by default. To grant an application access to manage objects in a restricted management administrative unit, you must assign the *Directory.Write.Restricted* [permission in Microsoft Graph](/graph/permissions-reference?branch=main#directory-permissions).
+
+## License requirements
+
+Restricted management administrative units require an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+
+## Next steps
+
+- [Create, update, or delete administrative units](admin-units-manage.md)
+- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Managing devices in Intune is *not* supported at this time.
## Next steps

- [Create or delete administrative units](admin-units-manage.md)
-- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
-- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
+- [Restricted management administrative units](admin-units-restricted-management.md)
- [Administrative unit limits](../enterprise-users/directory-service-limits-restrictions.md?context=%2fazure%2factive-directory%2froles%2fcontext%2fugr-context)
active-directory Custom User Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-user-permissions.md
Title: User management permissions for Azure AD custom roles (preview)
+ Title: User management permissions for Azure AD custom roles
description: User management permissions for Azure AD custom roles in the Azure portal, PowerShell, or Microsoft Graph API.
Previously updated : 10/26/2022 Last updated : 06/09/2023
-# User management permissions for Azure AD custom roles (preview)
-
-> [!IMPORTANT]
-> User management permissions for Azure AD custom roles is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# User management permissions for Azure AD custom roles
User management permissions can be used in custom role definitions in Azure Active Directory (Azure AD) to grant fine-grained access such as the following:

- Read or update basic properties of users
-- Read or update identity of users
+- Read identity of users
- Read or update job information of users
- Update contact information of users
- Update parental controls of users
The following permissions are available to read or update basic properties of us
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/standard/read | Read basic properties on users. |
-> | microsoft.directory/users/basic/update | Update basic properties on users. |
+> | microsoft.directory/users/standard/read | Read basic properties on users |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
-## Read or update identity of users
+## Read identity of users
-The following permissions are available to read or update identity of users.
+The following permissions are available to read identity of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/identities/read | Read identities of users. |
-> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
+> | microsoft.directory/users/identities/read | Read identities of users |
## Read or update job information of users
The following permissions are available to read or update job information of use
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/manager/read | Read manager of users. |
-> | microsoft.directory/users/manager/update | Update manager for users. |
-> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
+> | microsoft.directory/users/manager/read | Read manager of users |
+> | microsoft.directory/users/manager/update | Update manager for users |
+> | microsoft.directory/users/jobInfo/update | Update job information of users |
## Update contact information of users
The following permissions are available to update contact information of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
+> | microsoft.directory/users/contactInfo/update | Update contact properties on users |
## Update parental controls of users
The following permissions are available to update parental controls of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users |
## Update settings of users
The following permissions are available to update settings of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users |
## Read direct reports of users
The following permissions are available to read direct reports of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users |
## Update extension properties of users
The following permissions are available to update extension properties of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users |
## Read device information of users
The following permissions are available to read device information of users.
> | - | -- |
> | microsoft.directory/users/ownedDevices/read | Read owned devices of users |
> | microsoft.directory/users/registeredDevices/read | Read registered devices of users |
-> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users |
## Read or manage licenses of users
The following permissions are available to read or manage licenses of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
-> | microsoft.directory/users/assignLicense | Manage user licenses. |
-> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users |
+> | microsoft.directory/users/assignLicense | Manage user licenses |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users |
## Update password policies of users
The following permissions are available to update password policies of users.
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users |
## Read assignments and memberships of users
The following permissions are available to read assignments and memberships of u
> [!div class="mx-tableFixed"]
> | Permission | Description |
> | - | -- |
-> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users. |
-> | microsoft.directory/users/assignLicense | Manage user licenses. |
-> | microsoft.directory/users/basic/update | Update basic properties on users. |
-> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
-> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
-> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
-> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
-> | microsoft.directory/users/identities/read | Read identities of users. |
-> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
-> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
-> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
-> | microsoft.directory/users/manager/read | Read manager of users. |
-> | microsoft.directory/users/manager/update | Update manager for users. |
-> | microsoft.directory/users/memberOf/read | Read the group memberships of users. |
-> | microsoft.directory/users/ownedDevices/read | Read owned devices of users. |
-> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
-> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
-> | microsoft.directory/users/registeredDevices/read | Read registered devices of users. |
-> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
-> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit. |
-> | microsoft.directory/users/standard/read | Read basic properties on users. |
-> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users |
+> | microsoft.directory/users/assignLicense | Manage user licenses |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
+> | microsoft.directory/users/contactInfo/update | Update contact properties on users |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users |
+> | microsoft.directory/users/identities/read | Read identities of users |
+> | microsoft.directory/users/jobInfo/update | Update job information of users |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users |
+> | microsoft.directory/users/manager/read | Read manager of users |
+> | microsoft.directory/users/manager/update | Update manager for users |
+> | microsoft.directory/users/memberOf/read | Read the group memberships of users |
+> | microsoft.directory/users/ownedDevices/read | Read owned devices of users |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users |
+> | microsoft.directory/users/registeredDevices/read | Read registered devices of users |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit |
+> | microsoft.directory/users/standard/read | Read basic properties on users |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users |
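These permission strings are used verbatim in the `rolePermissions` of a custom role definition. A minimal sketch of the request body for `POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions` (the role name and description here are made up for illustration):

```python
import json
import uuid

def build_custom_role(display_name, description, actions):
    """Body for POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions."""
    return {
        "displayName": display_name,
        "description": description,
        "isEnabled": True,
        "templateId": str(uuid.uuid4()),  # optional client-supplied identifier
        "rolePermissions": [
            {"allowedResourceActions": actions},
        ],
    }

role = build_custom_role(
    "User Profile Editor",  # hypothetical role name
    "Read basic properties and manager, and update job information of users",
    [
        "microsoft.directory/users/standard/read",
        "microsoft.directory/users/manager/read",
        "microsoft.directory/users/jobInfo/update",
    ],
)
print(json.dumps(role, indent=2))
```

The same definition can then be assigned at directory or administrative unit scope.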
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 04/28/2023 Last updated : 06/08/2023
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/applications/verification/update | Update applicationsverification property |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/connectors/create | Create application proxy connectors |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/connectorGroups/create | Create application proxy connector groups |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/applications/verification/update | Update applicationsverification property |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/deletedItems.applications/delete | Permanently delete applications, which can no longer be restored |
> | microsoft.directory/deletedItems.applications/restore | Restore soft deleted applications to original state |
> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
Users in this role can enable, disable, and delete devices in Azure AD and read
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
> | microsoft.directory/deletedItems.devices/delete | Permanently delete devices, which can no longer be restored |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users |
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials |
> | microsoft.directory/verifiableCredentials/configuration/allProperties/update | Update configuration required to create and manage verifiable credentials |
> | microsoft.directory/lifecycleWorkflows/workflows/allProperties/allTasks | Manage all aspects of lifecycle workflows and tasks in Azure AD |
+> | microsoft.directory/pendingExternalUserProfiles/create | Create external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/basic/update | Update basic properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/delete | Delete external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/basic/update | Update basic properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/delete | Delete external user profiles in the extended directory for Teams |
> | microsoft.azure.advancedThreatProtection/allEntities/allTasks | Manage all aspects of Azure Advanced Threat Protection |
> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users with this role **cannot** do the following:
> | microsoft.directory/appConsent/appConsentRequests/allProperties/read | Read all properties of consent requests for applications registered with Azure AD |
> | microsoft.directory/applications/allProperties/read | Read all properties (including privileged properties) on all types of applications |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
Users with this role **cannot** do the following:
> | microsoft.directory/domains/allProperties/read | Read all properties of domains |
> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management |
+> | microsoft.directory/externalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
> | microsoft.directory/groups/allProperties/read | Read all properties (including privileged properties) on Security groups and Microsoft 365 groups, including role-assignable groups |
> | microsoft.directory/groupSettings/allProperties/read | Read all properties of group settings |
> | microsoft.directory/groupSettingTemplates/allProperties/read | Read all properties of group setting templates |
Users with this role **cannot** do the following:
> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
> | microsoft.directory/oAuth2PermissionGrants/allProperties/read | Read all properties of OAuth 2.0 permission grants |
> | microsoft.directory/organization/allProperties/read | Read all properties for an organization |
+> | microsoft.directory/pendingExternalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
> | microsoft.directory/policies/allProperties/read | Read all properties of policies |
> | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
Users in this role can create, manage and deploy provisioning configuration setu
> | microsoft.directory/applications/tag/update | Update tags of applications |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/cloudProvisioning/allProperties/allTasks | Read and configure all properties of Azure AD Cloud Provisioning service. |
> | microsoft.directory/deletedItems.applications/delete | Permanently delete applications, which can no longer be restored |
> | microsoft.directory/deletedItems.applications/restore | Restore soft deleted applications to original state |
Assign the Lifecycle Workflows Administrator role to users who need to do the fo
> | Actions | Description |
> | | |
> | microsoft.directory/lifecycleWorkflows/workflows/allProperties/allTasks | Manage all aspects of lifecycle workflows and tasks in Azure AD |
+> | microsoft.directory/organization/strongAuthentication/read | Read strong authentication properties of an organization |
## Message Center Privacy Reader
Users with this role can view usage reporting data and the reports dashboard in
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | Actions | Description |
> | | |
> | microsoft.directory/applications/policies/update | Update policies of applications |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
Users with this role can manage alerts and have global read-only access on secur
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection |
In | Can do
> | Actions | Description |
> | | |
> | microsoft.directory/accessReviews/definitions/allProperties/read | Read all properties of access reviews of all reviewable resources in Azure AD |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
> | microsoft.directory/deviceLocalCredentials/standard/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, except the password |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/pendingExternalUserProfiles/create | Create external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/basic/update | Update basic properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/pendingExternalUserProfiles/delete | Delete external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/basic/update | Update basic properties of external user profiles in the extended directory for Teams |
+> | microsoft.directory/externalUserProfiles/delete | Delete external user profiles in the extended directory for Teams |
## Teams Communications Administrator
Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: |
Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Privileged Role Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark
Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
User Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
Title: 'Tutorial: Configure Oracle Cloud Infrastructure Console for automatic user provisioning with Azure Active Directory'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to Oracle Cloud Infrastructure Console.
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Oracle Cloud Infrastructure Console.
writer: twimmers
> [!NOTE]
> Integrating with Oracle Cloud Infrastructure Console or Oracle IDCS with a custom / BYOA application is not supported. Using the gallery application as described in this tutorial is supported. The gallery application has been customized to work with the Oracle SCIM server.
-This tutorial describes the steps you need to perform in both Oracle Cloud Infrastructure Console and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Oracle Cloud Infrastructure Console](https://www.oracle.com/cloud/free/?source=:ow:o:p:nav:0916BCButton&intcmp=:ow:o:p:nav:0916BCButton) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Oracle Cloud Infrastructure Console and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Oracle Cloud Infrastructure Console](https://www.oracle.com/cloud/free/?source=:ow:o:p:nav:0916BCButton&intcmp=:ow:o:p:nav:0916BCButton) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
This tutorial describes the steps you need to perform in both Oracle Cloud Infra
The scenario outlined in this tutorial assumes that you already have the following prerequisites:

* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* An Oracle Cloud Infrastructure Console [tenant](https://www.oracle.com/cloud/sign-in.html?intcmp=OcomFreeTier&source=:ow:o:p:nav:0916BCButton).
* A user account in Oracle Cloud Infrastructure Console with Admin permissions.
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Oracle Cloud Infrastructure Console to support provisioning with Azure AD
-1. Login to Oracle Cloud Infrastructure Console's admin portal. On the top left corner of the screen navigate to **Identity > Federation**.
+1. Log on to the Oracle Cloud Infrastructure Console admin portal. On the top left corner of the screen navigate to **Identity > Federation**.
![Oracle Admin](./media/oracle-cloud-infratstructure-console-provisioning-tutorial/identity.png)
The scenario outlined in this tutorial assumes that you already have the followi
![Oracle URL](./media/oracle-cloud-infratstructure-console-provisioning-tutorial/url.png)
-3. Click on **Add Identity Provider** to create a new identity provider. Save the IdP id to be used as a part of tenant URL.Click on plus icon beside the **Applications** tab to create an OAuth Client and Grant IDCS Identity Domain Administrator AppRole.
+3. Click on **Add Identity Provider** to create a new identity provider. Save the IdP ID to be used as a part of tenant URL. Select the plus icon beside the **Applications** tab to create an OAuth Client and Grant IDCS Identity Domain Administrator AppRole.
![Oracle Cloud Icon](./media/oracle-cloud-infratstructure-console-provisioning-tutorial/add.png)
-4. Follow the screenshots below to configure your application. Once the configuration is done click on **Save**.
+4. Follow the screenshots below to configure your application. When the configuration is done, select **Save**.
![Oracle Configuration](./media/oracle-cloud-infratstructure-console-provisioning-tutorial/configuration.png)
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User:isFederatedUser|Boolean|

> [!NOTE]
-> Additional extension attributes must begin with "urn:ietf:params:scim:api:"
+> The extension attributes "urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User:bypassNotification" and "urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User:isFederatedUser" are the only custom extension attributes supported.
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oracle Cloud Infrastructure Console**.
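A SCIM user resource carrying the two supported extension attributes might look like the following sketch. The payload shape follows general SCIM 2.0 conventions and the attribute names in the table above; the user values are placeholders, and this example isn't taken from Oracle documentation:

```python
import json

# Schema URN for the Oracle IDCS user extension, as listed in the
# attribute table above.
EXT = "urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User"

# Illustrative SCIM 2.0 user resource with the two supported
# custom extension attributes under their schema URN.
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", EXT],
    "userName": "jdoe@contoso.com",   # placeholder value
    "active": True,
    EXT: {
        "bypassNotification": True,   # suppress notification on create
        "isFederatedUser": True,      # mark the account as federated
    },
}

print(json.dumps(user, indent=2))
```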
active-directory Hipaa Configure For Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-configure-for-compliance.md
+
+ Title: Configure Azure Active Directory for HIPAA compliance
+description: Introduction for guidance on how to configure Azure Active Directory for HIPAA compliance level.
+Last updated: 04/13/2023
+# Configuring Azure Active Directory for HIPAA compliance
+
+Microsoft services such as Azure Active Directory (Azure AD) can help you meet identity-related requirements for the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
+
+The HIPAA Security Rule (HSR) establishes standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The HSR is managed by the U.S. Department of Health and Human Services (HHS) and requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information.
+
+Technical safeguards requirements and objectives are defined in Title 45 of the Code of Federal Regulations (CFR). Part 160 of Title 45 provides the general administrative requirements, and Part 164's subparts A and C describe the security and privacy requirements.
+
+Subpart § 164.304 defines technical safeguards as the technology and the policies and procedures for its use that protect electronic protected health information and control access to it. The HHS also outlines key areas for healthcare organizations to consider when implementing HIPAA technical safeguards. From [§ 164.312 Technical safeguards](https://www.ecfr.gov/current/title-45/section-164.312):
+
+* **Access controls** - Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in [§ 164.308(a)(4)](https://www.ecfr.gov/current/title-45/section-164.308).
+
+* **Audit controls** - Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.
+
+* **Integrity controls** - Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.
+
+* **Person or entity authentication** - Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.
+
+* **Transmission security** - Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.
+
+The HSR defines each subpart as a standard, along with required and addressable implementation specifications. All must be implemented. The "addressable" designation denotes a specification that is reasonable and appropriate; it doesn't mean an implementation specification is optional. Therefore, subparts defined as addressable are also required.
+
+The remaining articles in this series provide guidance and links to resources, organized by key areas and technical safeguards. For each key area, there's a table with the relevant safeguards listed, and links to Azure Active Directory (Azure AD) guidance to accomplish the safeguard.
+
+## Learn more
+
+* [HHS Zero Trust in Healthcare pdf](https://www.hhs.gov/sites/default/files/zero-trust.pdf)
+
+* [Combined regulation text](https://www.hhs.gov/ocr/privacy/hipaa/administrative/combined/index.html) of all HIPAA Administrative Simplification Regulations found at 45 CFR 160, 162, and 164
+
+* [Code of Federal Regulations (CFR) Title 45](https://www.ecfr.gov/current/title-45) describing the public welfare portion of the regulation
+
+* [Part 160](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-160?toc=1) describing the general administrative requirements of Title 45
+
+* [Part 164](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164) Subparts A and C describing the security and privacy requirements of Title 45
+
+* [HIPAA Security Risk Safeguard Tool](https://www.healthit.gov/providers-professionals/security-risk-assessment-tool)
+
+* [NIST HSR Toolkit](http://scap.nist.gov/hipaa/)
+
+## Next steps
+
+* [Access Controls Safeguard guidance](hipaa-access-controls.md)
+
+* [Audit Controls Safeguard guidance](hipaa-audit-controls.md)
+
+* [Other Safeguard guidance](hipaa-other-controls.md)
active-directory Pci Dss Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-dss-guidance.md
+
+ Title: Azure Active Directory PCI-DSS guidance
+description: Guidance on meeting payment card industry (PCI) compliance with Azure AD
+Last updated: 04/18/2023
+# Azure Active Directory PCI-DSS guidance
+
+The Payment Card Industry Security Standards Council (PCI SSC) is responsible for developing and promoting data security standards and resources, including the Payment Card Industry Data Security Standard (PCI-DSS), to ensure the security of payment transactions. To achieve PCI compliance, organizations using Azure Active Directory (Azure AD) can refer to guidance in this document. However, it is the responsibility of the organizations to ensure their PCI compliance. Their IT teams, SecOps teams, and Solutions Architects are responsible for creating and maintaining secure systems, products, and networks that handle, process, and store payment card information.
+
+While Azure AD helps meet some PCI-DSS control requirements, and provides modern identity and access protocols for cardholder data environment (CDE) resources, it shouldn't be the sole mechanism for protecting cardholder data. Therefore, review this document set and all PCI-DSS requirements to establish a comprehensive security program that preserves customer trust. For a complete list of requirements, visit the official PCI Security Standards Council website at pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf)
+
+## PCI requirements for controls
+
+The global PCI-DSS v4.0 establishes a baseline of technical and operational standards for protecting account data. It "was developed to encourage and enhance payment card account data security and facilitate the broad adoption of consistent data security measures, globally. It provides a baseline of technical and operational requirements designed to protect account data. While specifically designed to focus on environments with payment card account data, PCI-DSS can also be used to protect against threats and secure other elements in the payment ecosystem."
+
+## Azure AD configuration and PCI-DSS
+
+This document serves as a comprehensive guide for technical and business leaders who are responsible for managing identity and access management (IAM) with Azure Active Directory (Azure AD) in compliance with the Payment Card Industry Data Security Standard (PCI DSS). By following the key requirements, best practices, and approaches outlined in this document, organizations can reduce the scope, complexity, and risk of PCI noncompliance, while promoting security best practices and standards compliance. The guidance provided in this document aims to help organizations configure Azure AD in a way that meets the necessary PCI DSS requirements and promotes effective IAM practices.
+
+Technical and business leaders can use the following guidance to fulfill responsibilities for identity and access management (IAM) with Azure AD. For more information on PCI-DSS in other Microsoft workloads, see [Overview of the Microsoft cloud security benchmark (v1)](/security/benchmark/azure/overview).
+
+PCI-DSS requirements and testing procedures consist of 12 principal requirements that ensure the secure handling of payment card information. Together, these requirements are a comprehensive framework that helps organizations secure payment card transactions and protect sensitive cardholder data.
+
+Azure AD is an enterprise identity service that secures applications, systems, and resources to support PCI-DSS compliance. The following table has the PCI principal requirements and links to Azure AD recommended controls for PCI-DSS compliance.
+
+## Principal PCI-DSS requirements
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't addressed or met by Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+|PCI Data Security Standard - High Level Overview|Azure AD recommended PCI-DSS controls|
+|-|-|
+|Build and Maintain Secure Network and Systems|[1. Install and Maintain Network Security Controls](pci-requirement-1.md) </br> [2. Apply Secure Configurations to All System Components](pci-requirement-2.md)|
+|Protect Account Data|3. Protect Stored Account Data </br> 4. Protect Cardholder Data with Strong Cryptography During Transmission Over Public Networks|
+|Maintain a Vulnerability Management Program|[5. Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) </br> [6. Develop and Maintain Secure Systems and Software](pci-requirement-6.md)|
+|Implement Strong Access Control Measures|[7. Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) </br> [8. Identify and Authenticate Access to System Components](pci-requirement-8.md) </br> 9. Restrict Physical Access to System Components and Cardholder Data|
+|Regularly Monitor and Test Networks|[10. Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) </br> [11. Test Security of Systems and Networks Regularly](pci-requirement-11.md)|
+|Maintain an Information Security Policy|12. Support Information Security with Organizational Policies and Programs|
+
+## PCI-DSS applicability
+
+PCI-DSS applies to organizations that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD). These data elements, considered together, are known as account data. PCI-DSS provides security guidelines and requirements for organizations that affect the cardholder data environment (CDE). Entities safeguarding the CDE ensure the confidentiality and security of customer payment information.
+
+CHD consists of:
+
+* **Primary account number (PAN)** - a unique payment card number (credit, debit, or prepaid cards, etc.) that identifies the issuer and the cardholder account
+* **Cardholder name** - the card owner
+* **Card expiration date** - the month and year the card expires
+* **Service code** - a three- or four-digit value in the magnetic stripe that follows the expiration date of the payment card on the track data. It defines service attributes, differentiating between international and national/regional interchange, or identifying usage restrictions.
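Because the PAN is the most sensitive CHD element, PCI-DSS also expects it to be masked when displayed (historically to at most the first six and last four digits). A minimal masking sketch, illustrative only and not taken from the standard:

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display: keep the first 6 (issuer BIN) and last 4 digits."""
    digits = pan.replace(" ", "")
    if len(digits) < 13:  # shortest common PAN length
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # -> 411111******1111
```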
+
+SAD consists of security-related information used to authenticate cardholders and/or authorize payment card transactions. SAD includes, but isn't limited to:
+
+* **Full track data** - magnetic stripe or chip equivalent
+* **Card verification codes/values** - also referred to as the card validation code (CVC), or value (CVV). It's the three- or four-digit value on the front or back of the payment card. It's also referred to as CAV2, CVC2, CVN2, CVV2 or CID, determined by the participating payment brands (PPB).
+* **PIN** - personal identification number
+ * **PIN blocks** - an encrypted representation of the PIN used in a debit or credit card transaction. It ensures the secure transmission of sensitive information during a transaction
+
+Protecting the CDE is essential to the security and confidentiality of customer payment information and helps:
+
+* **Preserve customer trust** - customers expect their payment information to be handled securely and kept confidential. If a company experiences a data breach that results in the theft of customer payment data, it can degrade customer trust in the company and cause reputational damage.
+* **Comply with regulations** - companies processing credit card transactions are required to comply with the PCI-DSS. Failure to comply results in fines, legal liabilities, and resultant reputational damage.
+* **Financial risk mitigation** - data breaches have significant financial effects, including costs for forensic investigations, legal fees, and compensation for affected customers.
+* **Business continuity** - data breaches disrupt business operations and might affect credit card transaction processes. This scenario might lead to lost revenue, operational disruptions, and reputational damage.
+
+## PCI audit scope
+
+PCI audit scope relates to the systems, networks, and processes in the storage, processing, or transmission of CHD and/or SAD. If Account Data is stored, processed, or transmitted in a cloud environment, PCI-DSS applies to that environment and compliance typically involves validation of the cloud environment and the usage of it. There are five fundamental elements in scope for a PCI audit:
+
+* **Cardholder data environment (CDE)** - the area where CHD, and/or SAD, is stored, processed, or transmitted. It includes an organization's components that touch CHD, such as networks and network components, databases, servers, applications, and payment terminals.
+* **People** - with access to the CDE, such as employees, contractors, and third-party service providers, are in the scope of a PCI audit.
+* **Processes** - that involve CHD, such as authorization, authentication, encryption and storage of account data in any format, are within the scope of a PCI audit.
+* **Technology** - that processes, stores, or transmits CHD, including hardware such as printers, and multi-function devices that scan, print, and fax, end-user devices such as computers, laptops, workstations, administrative workstations, tablets and mobile devices, software, and other IT systems, are in the scope of a PCI audit.
+* **System components** - that might not store, process, or transmit CHD/SAD but have unrestricted connectivity to system components that store, process, or transmit CHD/SAD, or that could affect the security of the CDE.
+
+If PCI scope is minimized, organizations can effectively reduce the effects of security incidents and lower the risk of data breaches. Segmentation can be a valuable strategy for reducing the size of the PCI CDE, resulting in reduced compliance costs and overall benefits for the organization including but not limited to:
+
+* **Cost savings** - by limiting audit scope, organizations reduce time, resources, and expenses to undergo an audit, which leads to cost savings.
+* **Reduced risk exposure** - a smaller PCI audit scope reduces potential risks associated with processing, storing, and transmitting cardholder data. If the number of systems, networks, and applications subject to an audit are limited, organizations focus on securing their critical assets and reducing their risk exposure.
+* **Streamlined compliance** - narrowing audit scope makes PCI-DSS compliance more manageable and streamlined. Results are more efficient audits, fewer compliance issues, and a reduced risk of incurring noncompliance penalties.
+* **Improved security posture** - with a smaller subset of systems and processes, organizations allocate security resources and efforts efficiently. Outcomes are a stronger security posture, as security teams concentrate on securing critical assets and identifying vulnerabilities in a targeted and effective manner.
+
+## Strategies to reduce PCI audit scope
+
+An organization's definition of its CDE determines PCI audit scope. Organizations document and communicate this definition to the PCI-DSS Qualified Security Assessor (QSA) performing the audit. The QSA assesses controls for the CDE to determine compliance.
+Adherence to PCI standards and use of effective risk mitigation helps businesses protect customer personal and financial data, which maintains trust in their operations. The following section outlines strategies to reduce risk in PCI audit scope.
+
+### Tokenization
+
+Tokenization is a data security technique. Use tokenization to replace sensitive information, such as credit card numbers, with a unique token stored and used for transactions, without exposing sensitive data. Tokens reduce the scope of a PCI audit for the following requirements:
+
+* **Requirement 3** - Protect Stored Account Data
+* **Requirement 4** - Protect Cardholder Data with Strong Cryptography During Transmission Over Open Public Networks
+* **Requirement 9** - Restrict Physical Access to Cardholder Data
+* **Requirement 10** - Log and Monitor All Access to System Components and Cardholder Data
+
+When using cloud-based processing methodologies, consider the relevant risks to sensitive data and transactions. To mitigate these risks, it's recommended you implement relevant security measures and contingency plans to protect data and prevent transaction interruptions. As a best practice, use payment tokenization as a methodology to declassify data, and potentially reduce the footprint of the CDE. With payment tokenization, sensitive data is replaced with a unique identifier that reduces the risk of data theft and limits the exposure of sensitive information in the CDE.
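The tokenization idea above can be sketched as follows. This is an illustrative in-memory stand-in, not a production token vault; a real vault encrypts the mapping, restricts access to it, and sits outside the systems that handle tokens:

```python
import secrets

# Minimal tokenization sketch: the PAN is swapped for a random surrogate
# token, and only the vault can map the token back to the PAN.
_vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Replace a PAN with a surrogate token and record the mapping."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan  # a real vault encrypts and access-controls this mapping
    return token

def detokenize(token: str) -> str:
    """Recover the original PAN; only the vault service may do this."""
    return _vault[token]

token = tokenize("4111111111111111")
assert token != "4111111111111111"  # downstream systems see only the token
```

Systems that store or pass around only such tokens never hold the PAN itself, which is how tokenization shrinks the CDE footprint.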
+
+### Secure CDE
+
+PCI-DSS requires organizations to maintain a secure CDE. With effectively configured CDE, businesses can mitigate their risk exposure and reduce the associated costs for both on-premises and cloud environments. This approach helps minimize the scope of a PCI audit, making it easier and more cost-effective to demonstrate compliance with the standard.
+
+To configure Azure AD to secure the CDE:
+
+* Use passwordless credentials for users: Windows Hello for Business, FIDO2 security keys, and Microsoft Authenticator app
+* Use strong credentials for workload identities: certificates and managed identities for Azure resources.
+* Integrate access technologies such as VPN, remote desktop, and network access points with Azure AD for authentication, if applicable
+* Enable privileged identity management and access reviews for Azure AD roles, privileged access groups and Azure resources
+* Use Conditional Access policies to enforce PCI-requirement controls: credential strength, device state, and enforce them based on location, group membership, applications, and risk
+* Use modern authentication for CDE workloads
+* Archive Azure AD logs in security information and event management (SIEM) systems
+
+Where applications and resources use Azure AD for identity and access management (IAM), the Azure AD tenant(s) are in scope of PCI audit, and the guidance herein is applicable. Organizations must evaluate identity and resource isolation requirements, between non-PCI and PCI workloads, to determine their best architecture.
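As one concrete example, a Conditional Access policy that requires MFA for a CDE application can be expressed as a JSON body for the Microsoft Graph `identity/conditionalAccess/policies` endpoint. The group and application IDs below are placeholders, and report-only mode is shown as a cautious default before enforcement:

```python
import json

# Sketch of a Conditional Access policy body requiring MFA for a CDE app,
# as it would be POSTed to Microsoft Graph. IDs are placeholders.
policy = {
    "displayName": "Require MFA for CDE applications",
    "state": "enabledForReportingButNotEnforced",  # pilot in report-only first
    "conditions": {
        "users": {"includeGroups": ["<cde-operators-group-id>"]},
        "applications": {"includeApplications": ["<cde-app-id>"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

print(json.dumps(policy, indent=2))
```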
+
+Learn more:
+
+* [Introduction to delegated administration and isolated environments](../fundamentals/secure-introduction.md)
+* [How to use the Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc)
+* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+* [What are access reviews?](../governance/access-reviews-overview.md)
+* [What is Conditional Access?](../conditional-access/overview.md)
+* [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md)
+
+### Establish a responsibility matrix
+
+PCI compliance is the responsibility of entities that process payment card transactions including but not limited to:
+
+* Merchants
+* Card service providers
+* Merchant service providers
+* Acquiring banks
+* Payment processors
+* Payment card issuers
+* Hardware vendors
+
+These entities ensure payment card transactions are processed securely and are PCI-DSS compliant. All entities involved in payment card transactions have a role to help ensure PCI compliance.
+
+Azure PCI DSS compliance status doesn't automatically translate to PCI-DSS validation for the services you build or host on Azure. You're responsible for ensuring that your services achieve compliance with PCI-DSS requirements.
+
+### Establish continuous processes to maintain compliance
+
+Continuous processes entail ongoing monitoring and improvement of compliance posture. Benefits of continuous processes to maintain PCI compliance:
+
+* Reduced risk of security incidents and noncompliance
+* Improved data security
+* Better alignment with regulatory requirements
+* Increased customer and stakeholder confidence
+
+With ongoing processes, organizations respond effectively to changes in the regulatory environment and ever-evolving security threats.
+
+* **Risk assessment** - conduct this process to identify credit-card data vulnerabilities and security risks. Identify potential threats, assess the likelihood of threats occurring, and evaluate the potential effects on the business.
+* **Security awareness training** - employees who handle credit card data receive regular security awareness training to clarify the importance of protecting cardholder data and the measures to do so.
+* **Vulnerability management** - conduct regular vulnerability scans and penetration testing to identify network or system weaknesses exploitable by attackers.
+* **Monitor and maintain access control policies** - access to credit card data is restricted to authorized individuals. Monitor access logs to identify unauthorized access attempts.
+* **Incident response** - an incident response plan helps security teams take action during security incidents involving credit card data. Identify the incident cause, contain the damage, and restore normal operations in a timely manner.
+* **Compliance monitoring and auditing** - conduct monitoring and audits to ensure ongoing compliance with PCI-DSS requirements. Review security logs, conduct regular policy reviews, and ensure system components are accurately configured and maintained.
+
+### Implement strong security for shared infrastructure
+
+Typically, web services such as Azure have a shared infrastructure wherein customer data might be stored on the same physical server or data storage device. This scenario creates the risk of unauthorized customers accessing data they don't own, and the risk of malicious actors targeting the shared infrastructure. Azure AD security features help mitigate risks associated with shared infrastructure:
+
+* User authentication to network access technologies that support modern authentication protocols: virtual private network (VPN), remote desktop, and network access points.
+* Access control policies that enforce strong authentication methods and device compliance based on signals such as user context, device, location, and risk.
+* Conditional Access provides an identity-driven control plane and brings signals together, to make decisions, and enforce organizational policies.
+* Privileged role governance - access reviews, just-in-time (JIT) activation, etc.
+
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
+
+### Data residency
+
+PCI-DSS cites no specific geographic location for credit card data storage. However, it requires cardholder data is stored securely, which might include geographic restrictions, depending on the organization's security and regulatory requirements. Different countries and regions have data protection and privacy laws. Consult with a legal or compliance advisor to determine applicable data residency requirements.
+
+Learn more: [Azure AD and data residency](../fundamentals/data-residency.md)
+
+### Third-party security risks
+
+A non-PCI compliant third-party provider poses a risk to PCI compliance. Regularly assess and monitor third-party vendors and service providers to ensure they maintain required controls to protect cardholder data.
+
+Azure AD features and functions in **Data residency** help mitigate risks associated with third-party security.
+
+### Logging and monitoring
+
+Implement accurate logging and monitoring to detect, and respond to, security incidents in a timely manner. Azure AD helps manage PCI compliance with audit and activity logs, and reports that can be integrated with a SIEM system. Azure AD has role-based access control (RBAC) and MFA to secure access to sensitive resources, plus encryption and threat protection features to protect organizations from unauthorized access and data theft.
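For example, a SIEM-side triage rule over exported sign-in events might flag risky sign-ins for analyst review. The field names mirror the Microsoft Graph `signIn` resource, and the sample events are fabricated for illustration:

```python
# Sketch of SIEM-side triage over exported Azure AD sign-in events.
# Sample events are fabricated; field names follow the Graph signIn resource.
events = [
    {"userPrincipalName": "admin@contoso.com", "ipAddress": "203.0.113.7",
     "riskLevelDuringSignIn": "high"},
    {"userPrincipalName": "clerk@contoso.com", "ipAddress": "198.51.100.2",
     "riskLevelDuringSignIn": "none"},
]

def risky(events, levels=("medium", "high")):
    """Return sign-ins whose assessed risk warrants an analyst's attention."""
    return [e for e in events if e["riskLevelDuringSignIn"] in levels]

for e in risky(events):
    print(f"ALERT {e['userPrincipalName']} from {e['ipAddress']}")
```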
+
+Learn more:
+
+* [What are Azure AD reports?](../reports-monitoring/overview-reports.md)
+* [Azure AD built-in roles](../roles/permissions-reference.md)
+
+### Multi-application environments: host outside the CDE
+
+PCI-DSS ensures that companies that accept, process, store, or transmit credit card information maintain a secure environment. Hosting outside the CDE introduces risks such as:
+
+* Poor access control and identity management might result in unauthorized access to sensitive data and systems
+* Insufficient logging and monitoring of security events impedes detection and response to security incidents
+* Insufficient encryption and threat protection increases the risk of data theft and unauthorized access
+* Poor, or no security awareness and training for users might result in avoidable social engineering attacks, such as phishing
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md) (You're here)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
+
active-directory Pci Dss Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-dss-mfa.md
+
+ Title: Azure Active Directory PCI-DSS Multi-Factor Authentication guidance
+description: Learn the authentication methods supported by Azure AD to meet PCI MFA requirements
+Last updated: 04/18/2023
+# Azure Active Directory PCI-DSS Multi-Factor Authentication guidance
+**Information Supplement: Multi-Factor Authentication v 1.0**
+
+Use the following table of authentication methods supported by Azure Active Directory (Azure AD) to meet requirements in the PCI Security Standards Council [Information Supplement, Multi-Factor Authentication v 1.0](https://listings.pcisecuritystandards.org/pdfs/Multi-Factor-Authentication-Guidance-v1.pdf).
+
+|Method|To meet requirements|Protection|MFA element|
+|-|-|-|-|
+|[Passwordless phone sign in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)|Something you have (device with a key), something you know or are (PIN or biometric) </br> In iOS, Authenticator Secure Element (SE) stores the key in Keychain. [Apple Platform Security, Keychain data protection](https://support.apple.com/guide/security/keychain-data-protection-secb0694df1a/web) </br> In Android, Authenticator uses Trusted Execution Engine (TEE) by storing the key in Keystore. [Developers, Android Keystore system](https://developer.android.com/training/articles/keystore) </br> When users authenticate using Microsoft Authenticator, Azure AD generates a random number the user enters in the app. This action fulfills the out-of-band authentication requirement. |Customers configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture, then Azure AD validates the authentication method. |
+|[Windows Hello for Business Deployment Prerequisite Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification) |Something you have (Windows device with a key), and something you know or are (PIN or biometric). </br> Keys are stored with device Trusted Platform Module (TPM). Customers use devices with hardware TPM 2.0 or later to meet the authentication method independence and out-of-band requirements. </br> [Certified Authenticator Levels](https://fidoalliance.org/certification/authenticator-certification-levels/)|Configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture for Windows device sign in.|
+|[Enable passwordless security key sign-in, Enable FIDO2 security key method](../authentication/howto-authentication-passwordless-security-key.md)|Something that you have (FIDO2 security key) and something you know or are (PIN or biometric). </br> Keys are stored with hardware cryptographic features. Customers use FIDO2 keys, at least Authentication Certification Level 2 (L2) to meet the authentication method independence and out-of-band requirement.|Procure hardware with protection against tampering and compromise.|Users unlock the key with the gesture, then Azure AD validates the credential. |
+|[Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)|Something you have (smart card) and something you know (PIN). </br> Physical smart cards or virtual smartcards stored in TPM 2.0 or later, are a Secure Element (SE). This action meets the authentication method independence and out-of-band requirement.|Procure smart cards with protection against tampering and compromise.|Users unlock the certificate private key with the gesture, or PIN, then Azure AD validates the credential. |
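The number-matching step described for Microsoft Authenticator in the table above can be sketched as follows. This is illustrative logic only; the real exchange runs between the Azure AD service and the app:

```python
import secrets

def new_challenge() -> int:
    """Generate the random two-digit number the service displays at sign-in."""
    return secrets.randbelow(90) + 10  # uniform in 10..99

def verify(challenge: int, entered: int) -> bool:
    """The approval succeeds only if the user typed the displayed number."""
    return challenge == entered

c = new_challenge()
assert 10 <= c <= 99
```

Requiring the user to type the displayed number into the app, rather than just tap Approve, defeats blind push-approval (MFA fatigue) attacks.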
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md) (You're here)
active-directory Pci Requirement 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-1.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) (You're here)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-10.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) (You're here)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-11.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) (You're here)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-2.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) (You're here)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-5.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) (You're here)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-6.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-7.md
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
active-directory Pci Requirement 8 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-8.md
## 8.3 Strong authentication for users and administrators is established and managed.
-For more information about Azure AD authentication methods that meet PCI requirements, see: [Information Supplement: Multi-Factor Authentication](azure-ad-pci-dss-mfa.md).
+For more information about Azure AD authentication methods that meet PCI requirements, see: [Information Supplement: Multi-Factor Authentication](pci-dss-mfa.md).
|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
|-|-|
PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure
To configure Azure AD to comply with PCI-DSS, see the following articles.
-* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Azure AD PCI-DSS guidance](pci-dss-guidance.md)
* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
To configure Azure AD to comply with PCI-DSS, see the following articles.
* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) (You're here)
* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
-* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](pci-dss-mfa.md)
aks Cilium Enterprise Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cilium-enterprise-marketplace.md
Designed for platform teams and using the power of eBPF, Isovalent Cilium Enterp
* Enables self-service for monitoring, troubleshooting, and security workflows in Kubernetes. Teams can access current and historical views of flow data, metrics, and visualizations for their specific namespaces.

> [!NOTE]
-> If you are upgrading an existing AKS cluster, then it must be created with Azure CNI powered by Cilium. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+> If you are upgrading an existing AKS cluster, then it must be created with Azure CNI powered by Cilium. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md).
## Prerequisites
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- An existing Azure Kubernetes Service (AKS) cluster running Azure CNI powered by Cilium. If you don't have an existing AKS cluster, you can create one from the Azure portal. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+- An existing Azure Kubernetes Service (AKS) cluster running Azure CNI powered by Cilium. If you don't have an existing AKS cluster, you can create one from the Azure portal. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md).
## Deploy Isovalent Cilium Enterprise on Azure Marketplace
You can uninstall an Isovalent Cilium Enterprise offer using the AKS extension d
## Next steps
-- [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md)
+- [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md)
-- [What is Azure Kubernetes Service?](intro-kubernetes.md)
+- [What is Azure Kubernetes Service?](intro-kubernetes.md)
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
Title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS) description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster. Previously updated : 05/16/2023 Last updated : 05/27/2023
The benefits of this approach are:
* It's simple and can be automated. * No need to clean up original configuration using in-tree storage class. * Low risk as you're only performing a logical deletion of Kubernetes PV/PVC, the actual physical data isn't deleted.
-* No extra costs as the result of not having to create more objects such as disk, snapshots, etc.
+* No extra cost incurred, because you don't have to create additional Azure objects such as disks, snapshots, and so on.
The following are important considerations to evaluate:
Before proceeding, verify the following:
## Migrate File share volumes
-Migration from in-tree to CSI is supported by creating a static volume.
+Migration from in-tree to CSI is supported by creating a static volume:
+* No need to clean up original configuration using in-tree storage class.
+* Low risk as you're only performing a logical deletion of Kubernetes PV/PVC, the actual physical data isn't deleted.
+* No extra cost incurred, because you don't have to create additional Azure objects such as file shares.
### Migration
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
Previously updated : 01/09/2023 Last updated : 06/08/2023 # Configure the Dapr extension for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project
If no configuration-settings are passed, the Dapr configuration defaults to:
For a list of available options, see [Dapr configuration][dapr-configuration-options].
-## Limiting the extension to certain nodes
+## Limit the extension to certain nodes
In some configurations, you may only want to run Dapr on certain nodes. You can limit the extension by passing a `nodeSelector` in the extension configuration. If the desired `nodeSelector` contains `.` characters, you must escape them from the shell and the extension. For example, the following configuration installs Dapr only to nodes with `topology.kubernetes.io/zone: "us-east-1c"`:
az k8s-extension create --cluster-type managedClusters \
--configuration-settings "global.daprControlPlaneOs=linux" \
--configuration-settings "global.daprControlPlaneArch=amd64"
```
+
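As a rough illustration of the dot-escaping rule for `nodeSelector` keys (a sketch using plain shell string substitution; the `global.nodeSelector.<label>` key path and the zone value are assumptions, not an official CLI feature):

```shell
# Sketch: escape "." in a node label key before using it as a Helm-style
# configuration key, so the dots aren't parsed as nesting separators.
label_key='topology.kubernetes.io/zone'
escaped_key="${label_key//./\\.}"   # -> topology\.kubernetes\.io/zone
setting="global.nodeSelector.${escaped_key}=us-east-1c"
echo "$setting"
```

The resulting string can then be passed as a `--configuration-settings` value; remember that the shell may consume one level of backslashes, so quote the argument.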
+## Install Dapr in multiple availability zones while in HA mode
+
+By default, the placement service uses a storage class of type `standard_LRS`. When you install Dapr in HA mode across multiple availability zones, we recommend creating a zone-redundant storage class. For example, to create a ZRS-type storage class:
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: custom-zone-redundant-storage
+provisioner: disk.csi.azure.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: WaitForFirstConsumer
+parameters:
+ storageaccounttype: Premium_ZRS
+```
+
+When installing Dapr, use the above storage class:
+
+```azurecli
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name XXX \
+--resource-group XXX \
+--name XXX \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version XXX \
+--version XXX \
+--configuration-settings "dapr_placement.volumeclaims.storageClassName=custom-zone-redundant-storage"
+```
+
## Configure the Dapr release namespace
You can configure the release namespace. The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. Include the cluster `--scope` to redefine the namespace.
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
$blobName="ContosoBackup.apimbackup"
```powershell
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageResourceGroup -StorageAccountName $storageAccountName)[0].Value
-$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey$st
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
    -StorageContext $storageContext -SourceContainerName $containerName -SourceBlobName $blobName
Check out the following related resources for the backup/restore process:
[api-management-arm-token]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-arm-token.png
[api-management-endpoint]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-endpoint.png
[control-plane-ip-address]: virtual-network-reference.md#control-plane-ip-addresses
-[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
+[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
To follow the steps described in this article, you must have:
## Scenario
-In this article, you learn how to use a single API Management instance for internal and external consumers and make it act as a single front end for both on-premises and cloud APIs. You'll create an API Management instance of the newer single-tenant version 2 (stv2) type. You'll also understand how to expose only a subset of your APIs for external consumption by using routing functionality available in Application Gateway. In the example, the APIs are highlighted in green.
+In this article, you learn how to use a single API Management instance for internal and external consumers and make it act as a single front end for both on-premises and cloud APIs. You create an API Management instance of the newer single-tenant version 2 (stv2) type. You also understand how to expose only a subset of your APIs for external consumption by using routing functionality available in Application Gateway. In the example, the APIs are highlighted in green.
In the first setup example, all your APIs are managed only from within your virtual network. Internal consumers can access all your internal and external APIs. Traffic never goes out to the internet. High-performance connectivity can be delivered via Azure ExpressRoute circuits. In the example, the internal consumers are highlighted in orange.
In the first setup example, all your APIs are managed only from within your virt
* **Listener**: The listener has a front-end port, a protocol (Http or Https, these values are case sensitive), and the TLS/SSL certificate name (if configuring TLS offload).
* **Rule**: The rule binds a listener to a back-end server pool.
* **Custom health probe**: Application Gateway, by default, uses IP address-based probes to figure out which servers in `BackendAddressPool` are active. API Management only responds to requests with the correct host header, so the default probes fail. You define a custom health probe to help the application gateway determine that the service is alive and should forward requests.
-* **Custom domain certificates**: To access API Management from the internet, create DNS records to map its host names to the Application Gateway front-end IP address. This mapping ensures that the host name header and certificate sent to Application Gateway and forwarded to API Management are ones that API Management recognizes as valid. In this example, we'll use three certificates. They're for API Management's gateway (the back end), the developer portal, and the management endpoint.
+* **Custom domain certificates**: To access API Management from the internet, create DNS records to map its host names to the Application Gateway front-end IP address. This mapping ensures that the Host header and certificate sent to API Management are valid. In this example, we use three certificates. They're for API Management's gateway (the back end), the developer portal, and the management endpoint.
### Expose the developer portal and management endpoint externally through Application Gateway
All configuration items must be set up before you create the application gateway
1. Upload the trusted root certificate to be configured on the HTTP settings.

   ```powershell
- $trustedRootCert = New-AzApplicationGatewayTrustedRootCertificate -Name "whitelistcert1" -CertificateFile $trustedRootCertCerPath
+ $trustedRootCert = New-AzApplicationGatewayTrustedRootCertificate -Name "allowlistcert1" -CertificateFile $trustedRootCertCerPath
   ```
1. Configure HTTP back-end settings for the application gateway, including a timeout limit for back-end requests, after which they're canceled. This value is different from the probe timeout.
api-management Use Oauth2 For Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/use-oauth2-for-authorization.md
This article shows an Azure API Management policy sample that demonstrates how to use OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from Azure Active Directory and forward it to the backend.

* For a more detailed example policy that not only acquires an access token, but also caches and renews it upon expiration, see [this blog](https://techcommunity.microsoft.com/t5/azure-paas-blog/api-management-policy-for-access-token-acquisition-caching-and/ba-p/2191623).
-* API Management [authorizations](../authorizations-overview.md) (preview) can also be used to simplify the process of managing authorization tokens to OAuth 2.0 backend services.
+* API Management [authorizations](../authorizations-overview.md) can also be used to simplify the process of managing authorization tokens to OAuth 2.0 backend services.
To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md).
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 04/02/2023 Last updated : 06/08/2023 # Compare Azure Government and global Azure
For more information, see [Connect Operations Manager to Azure Monitor](../azure
Application Insights (part of Azure Monitor) enables the same features in both Azure and Azure Government. This section describes the supplemental configuration that is required to use Application Insights in Azure Government.
-**Visual Studio** - In Azure Government, you can enable monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based applications running on Azure App Service. For more information, see [Application monitoring for Azure App Service overview](../azure-monitor/app/azure-web-apps.md). In Visual Studio, go to Tools|Options|Accounts|Registered Azure Clouds|Add New Azure Cloud and select Azure US Government as the Discovery endpoint. After that, adding an account in File|Account Settings will prompt you for which cloud you want to add from.
+**Visual Studio** – In Azure Government, you can enable monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based applications running on Azure App Service. For more information, see [Application monitoring for Azure App Service overview](../azure-monitor/app/azure-web-apps.md). In Visual Studio, go to Tools|Options|Accounts|Registered Azure Clouds|Add New Azure Cloud and select Azure US Government as the Discovery endpoint. After that, adding an account in File|Account Settings will prompt you for which cloud you want to add from.
-**SDK endpoint modifications** - In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](/previous-versions/azure/azure-monitor/app/create-new-resource#override-default-endpoints).
+**SDK endpoint modifications** – In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](/previous-versions/azure/azure-monitor/app/create-new-resource#override-default-endpoints).
-**Firewall exceptions** - Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
+**Firewall exceptions** – Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
>[!NOTE]
>Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic except for availability monitoring and webhooks, which require inbound firewall rules.
This section outlines variations and considerations when using Networking servic
For an overview of ExpressRoute, see [What is Azure ExpressRoute?](../expressroute/expressroute-introduction.md). For an overview of how **BGP communities** are used with ExpressRoute in Azure Government, see [BGP community support in National Clouds](../expressroute/expressroute-routing.md#bgp-community-support-in-national-clouds).
+### [Azure Front Door](../frontdoor/index.yml)
+
+Azure Front Door Standard and Premium tiers are available in public preview in Azure Government regions US Gov Arizona and US Gov Texas. During public preview, the following Azure Front Door **features aren't supported** in Azure Government:
+
+- Managed certificate for enabling HTTPS; instead, you need to use your own certificate.
+- [Migration](../frontdoor/tier-migration.md) from classic to Standard/Premium tier.
+- [Managed identity integration](../frontdoor/managed-identity.md) for Azure Front Door Standard/Premium access to Azure Key Vault for your own certificate.
+- [Tier upgrade](../frontdoor/tier-upgrade.md) from Standard to Premium.
+- Web Application Firewall (WAF) policies creation via WAF portal extension; instead, WAF policies can be created via Azure Front Door Standard/Premium portal extension. Updates and deletions to WAF policies and rules are supported on WAF portal extension.
+
### [Private Link](../private-link/index.yml)
- For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md
The US Department of Commerce is responsible for enforcing the [Export Administr
The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) aren't exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements wouldn't apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140 validated cryptographic modules and not intentionally stored in a military-embargoed country/region, that is, Country/Region Group D:5 as described in [Supplement No. 1 to Part 740](https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-740?toc=1) of the EAR, or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the *exporter* who has the responsibility to ensure that transfers, storage, and access to that data or software complies with the EAR.
-Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation.
+Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries/regions or in the Russian Federation.
Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
The US Department of State has export control authority over defense articles, s
DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the US Department of Commerce adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that don't constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country/region proscribed in § 126.1](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M/part-126?toc=1) or the Russian Federation, and 5) not sent from a country/region proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet isn't deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party.
-There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
+There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries/regions or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
The [Nuclear Regulatory Commission](https://www.nrc.gov/) (NRC) is responsible f
## OFAC Sanctions Laws
-The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/office-of-foreign-assets-control-sanctions-programs-and-information) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
+The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/office-of-foreign-assets-control-sanctions-programs-and-information) (OFAC) is responsible for administering and enforcing economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries/regions, terrorists, international narcotics traffickers, and those entities engaged in activities related to the proliferation of weapons of mass destruction.
The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempt by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies, for example, that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customer’s end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP address ranges, it also acknowledges that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customer’s end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries/regions. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP address ranges, it also acknowledges that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions aren't intended to prevent a resident of a proscribed country/region from viewing a public web site.
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-respons
The [CLOUD Act](https://www.congress.gov/bill/115th-congress/house-bill/4943) is a United States law that was enacted in March 2018. For more information, see Microsoft’s [blog post](https://blogs.microsoft.com/on-the-issues/2018/04/03/the-cloud-act-is-an-important-step-forward-but-now-more-steps-need-to-follow/) and the [follow-up blog post](https://blogs.microsoft.com/on-the-issues/2018/09/11/a-call-for-principle-based-international-agreements-to-govern-law-enforcement-access-to-data/) that describes Microsoft’s call for principle-based international agreements governing law enforcement access to data. Key points of interest to government customers procuring Azure services are captured below. - The CLOUD Act enables governments to negotiate new government-to-government agreements that will result in greater transparency and certainty for how information is disclosed to law enforcement agencies across international borders.-- The CLOUD Act isn't a mechanism for greater government surveillance; it's a mechanism toward ensuring that your data is ultimately protected by the laws of your home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries seeking bilateral agreements.
+- The CLOUD Act isn't a mechanism for greater government surveillance; it's a mechanism toward ensuring that your data is ultimately protected by the laws of your home country/region while continuing to facilitate lawful access to evidence for legitimate criminal investigations. Law enforcement in the US still needs to obtain a warrant demonstrating probable cause of a crime from an independent court before seeking the contents of communications. The CLOUD Act requires similar protections for other countries/regions seeking bilateral agreements.
- While the CLOUD Act creates new rights under new international agreements, it also preserves the common law right of cloud service providers to go to court to challenge search warrants when there's a conflict of laws – even without these new treaties in place. - Microsoft retains the legal right to object to a law enforcement order in the United States where the order clearly conflicts with the laws of the country/region where your data is hosted. Microsoft will continue to carefully evaluate every law enforcement request and exercise its rights to protect customers where appropriate. - For legitimate enterprise customers, US law enforcement will, in most instances, now go directly to customers rather than to Microsoft for information requests.
For classified workloads, you can provision key enabling Azure services to secur
- Secret - Top secret
-Similar data classification schemes exist in many countries.
+Similar data classification schemes exist in many countries/regions.
For top secret data, you can deploy Azure Stack Hub, which can operate disconnected from Azure and the Internet. [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that you can provision to accommodate various workloads on Azure.
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
Title: Add snap grid to the map | Microsoft Azure Maps description: How to add a snap grid to a map using Azure Maps Web SDK-- Previously updated : 07/20/2021-++ Last updated : 06/08/2023+
The resolution of the snapping grid is in pixels. The grid is square and relativ
Create a snap grid using the `atlas.drawing.SnapGridManager` class and pass in a reference to the map you want to connect the manager to. Set the `showGrid` option to `true` if you want to make the grid visible. To snap a shape to the grid, pass it into the snap grid manager's `snapShape` function. If you want to snap an array of positions, pass it into the `snapPositions` function.
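Under the hood, snapping a position to the grid is just rounding each pixel component to the nearest multiple of the grid resolution. A minimal standalone sketch of that math (`snapToGrid` is a hypothetical helper for illustration, not part of the `atlas` SDK):

```javascript
// Snap a pixel position to the nearest grid intersection.
// `resolution` is the grid cell size in pixels, analogous to the
// snap grid manager's resolution option.
function snapToGrid(position, resolution) {
  return position.map(v => Math.round(v / resolution) * resolution);
}

// A point at pixel [53, 87] on a 10-pixel grid snaps to [50, 90].
console.log(snapToGrid([53, 87], 10)); // [50, 90]
```

The SDK's `snapShape` and `snapPositions` functions apply this kind of rounding in pixel space before converting back to geographic coordinates, which is why the grid stays square at any latitude.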
-The following example snaps an HTML marker to a grid when it is dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires.
+The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires.
-<br/>
+<!--
<iframe height="500" scrolling="no" title="Use a snapping grid" src="https://codepen.io/azuremaps/embed/rNmzvXO?default-tab=js%2Cresult" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href="https://codepen.io/azuremaps/pen/rNmzvXO"> Use a snapping grid</a> by Azure Maps (<a href="https://codepen.io/azuremaps">@azuremaps</a>) on <a href="https://codepen.io">CodePen</a>. </iframe>-
+-->
## Snap grid options
-The following example shows the different customization options available for the snap grid manager. The grid line styles can be customized by retrieving the underlying line layer using the snap grid manager's `getGridLayer` function.
+The [Snap grid options] sample shows the different customization options available for the snap grid manager. The grid line styles can be customized by retrieving the underlying line layer using the snap grid manager's `getGridLayer` function.
-<br/>
+<!--
<iframe height="700" scrolling="no" title="Snap grid options" src="https://codepen.io/azuremaps/embed/RwVZJry?default-tab=result" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href="https://codepen.io/azuremaps/pen/RwVZJry"> Snap grid options</a> by Azure Maps (<a href="https://codepen.io/azuremaps">@azuremaps</a>) on <a href="https://codepen.io">CodePen</a>. </iframe>-
+-->
## Next steps
Learn how to use other features of the drawing tools module:
> [React to drawing events](drawing-tools-events.md) > [!div class="nextstepaction"]
-> [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md)
+> [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md)
+
+[Use a snapping grid]: https://samples.azuremaps.com/?search=Use%20a%20snapping%20grid&sample=use-a-snapping-grid
+[Snap grid options]: https://samples.azuremaps.com/?search=grid&sample=snap-grid-options
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
Title: Add a tile layer to a map | Microsoft Azure Maps
+ Title: Add a tile layer to a map
+ description: Learn how to superimpose images on maps. See an example that uses the Azure Maps Web SDK to add a tile layer containing a weather radar overlay to a map.-- Previously updated : 3/25/2021-++ Last updated : 06/08/2023+ - # Add a tile layer to a map This article shows you how to overlay a Tile layer on the map. Tile layers allow you to superimpose images on top of Azure Maps base map tiles. For more information on Azure Maps tiling system, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-A Tile layer loads in tiles from a server. These images can either be pre-rendered or dynamically rendered. Pre-rendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. There are three different tile service naming conventions supported by Azure Maps [TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) class:
+A Tile layer loads in tiles from a server. These images can either be prerendered or dynamically rendered. Prerendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. There are three different tile service naming conventions supported by the Azure Maps [TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) class:
* X, Y, Zoom notation - X is the column, Y is the row position of the tile in the tile grid, and the Zoom notation a value based on the zoom level. * Quadkey notation - Combines x, y, and zoom information into a single string value. This string value becomes a unique identifier for a single tile.
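The quadkey notation above can be computed from x, y, and zoom with a short helper. This is a sketch of the well-known Bing Maps tile-system algorithm, written as a standalone function rather than an SDK call:

```javascript
// Convert tile x, y, zoom to a quadkey string (Bing Maps tile system).
// Each quadkey digit encodes one zoom level: the x bit contributes 1,
// the y bit contributes 2, so each digit is 0-3.
function tileToQuadkey(x, y, zoom) {
  let quadkey = "";
  for (let i = zoom; i > 0; i--) {
    let digit = 0;
    const mask = 1 << (i - 1);
    if (x & mask) digit += 1;
    if (y & mask) digit += 2;
    quadkey += digit;
  }
  return quadkey;
}

console.log(tileToQuadkey(3, 5, 3)); // "213"
```

Because each digit identifies one quadrant per zoom level, a quadkey's length equals its zoom level and its prefix identifies every parent tile.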
The tile URL passed into a Tile layer must be an http or an https URL to a TileJ
* `{z}` - Zoom level of the tile. Also needs `{x}` and `{y}`. * `{quadkey}` - Tile quadkey identifier based on the Bing Maps tile system naming convention. * `{bbox-epsg-3857}` - A bounding box string with the format `{west},{south},{east},{north}` in the EPSG 3857 Spatial Reference System.
-* `{subdomain}` - A placeholder for the subdomain values, if specified the `subdomain` will be added.
+* `{subdomain}` - A placeholder for the subdomain values; if specified, the `subdomain` value is added.
* `{azMapsDomain}` - A placeholder to align the domain and authentication of tile requests with the same values used by the map. ## Add a tile layer
- This sample shows how to create a tile layer that points to a set of tiles. This sample uses the x, y, zoom tiling system. he source of this tile layer is the [OpenSeaMap project](https://openseamap.org/index.php), which contains crowd sourced nautical charts. When viewing radar data, ideally users would clearly see the labels of cities as they navigate the map. This behavior can be implemented by inserting the tile layer below the `labels` layer.
+ This sample shows how to create a tile layer that points to a set of tiles. This sample uses the x, y, zoom tiling system. The source of this tile layer is the [OpenSeaMap project], which contains crowdsourced nautical charts. Ideally, users would clearly see the labels of cities as they navigate the map when viewing radar data. This behavior can be implemented by inserting the tile layer below the `labels` layer.
```javascript //Create a tile layer and add it to the map below the label layer.
map.layers.add(new atlas.layer.TileLayer({
}), 'labels'); ```
-Below is the complete running code sample of the above functionality.
+For a fully functional sample that shows how to create a tile layer that points to a set of tiles using the x, y, zoom tiling system, see the [Tile Layer using X, Y, and Z] sample in the [Azure Maps Samples]. The source of the tile layer in this sample is a nautical chart from the [OpenSeaMap project], an OpenStreetMaps project licensed under ODbL.
-<br/>
+<!--
<iframe height='500' scrolling='no' title='Tile Layer using X, Y, and Z' src='//codepen.io/azuremaps/embed/BGEQjG/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/BGEQjG/'>Tile Layer using X, Y, and Z</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Add an OGC web-mapping service (WMS)
-A web-mapping service (WMS) is an Open Geospatial Consortium (OGC) standard for serving images of map data. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` coordinate reference system (CRS). When using a WMS service, set the width and height parameters to the same value that is supported by the service, and be sure to set this same value in the `tileSize` option. In the formatted URL, set the `BBOX` parameter of the service with the `{bbox-epsg-3857}` placeholder.
+A web-mapping service (WMS) is an Open Geospatial Consortium (OGC) standard for serving images of map data. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` coordinate reference system (CRS). When using a WMS service, set the width and height parameters to the value supported by the service, and be sure to set this value in the `tileSize` option. In the formatted URL, set the `BBOX` parameter of the service with the `{bbox-epsg-3857}` placeholder.
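For reference, the `{bbox-epsg-3857}` placeholder expands to the tile's bounds in web-mercator meters. The following standalone sketch (not an SDK call) shows how those bounds relate to tile x, y, and zoom:

```javascript
// EPSG:3857 half-extent of the world in meters.
const ORIGIN = 20037508.342789244;

// Compute the {west},{south},{east},{north} string in EPSG:3857
// for a tile at the given x, y, zoom.
function tileBbox3857(x, y, zoom) {
  const size = (2 * ORIGIN) / Math.pow(2, zoom); // tile width in meters
  const west = -ORIGIN + x * size;
  const north = ORIGIN - y * size;
  return [west, north - size, west + size, north].join(",");
}

console.log(tileBbox3857(1, 0, 1)); // bounds of the north-east quadrant at zoom 1
```

The map control performs this substitution for you; the sketch only illustrates why the service must support `EPSG:3857` for the requested images to line up with the base map tiles.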
-The following screenshot shows the above code overlaying a web-mapping service of geological data from the [U.S. Geological Survey (USGS)](https://mrdata.usgs.gov/) on top of a map, below the labels.
+For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Service (WMS), see the [WMS Tile Layer] sample in the [Azure Maps Samples].
-<br/>
+The following screenshot shows the [WMS Tile Layer] sample that overlays a web-mapping service of geological data from the [U.S. Geological Survey (USGS)] on top of the map and below the labels.
+
+<!--
<iframe height="500" scrolling="no" title="WMS Tile Layer" src="https://codepen.io/azuremaps/embed/BapjZqr?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZqr'>WMS Tile Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Add an OGC web-mapping tile service (WMTS)
-A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standard for serving tiled based overlays for maps. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` or `GoogleMapsCompatible` coordinate reference system (CRS). When using a WMTS service, set the width and height parameters to the same value that is supported by the service, be sure to set this same value in the `tileSize` option. In the formatted URL, replace the following placeholders accordingly:
+A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standard for serving tile-based overlays for maps. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` or `GoogleMapsCompatible` coordinate reference system (CRS). When using a WMTS service, set the width and height parameters to the same value supported by the service, and be sure to also set this value in the `tileSize` option. In the formatted URL, replace the following placeholders accordingly:
* `{TileMatrix}` => `{z}` * `{TileRow}` => `{y}` * `{TileCol}` => `{x}`
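Rewriting a WMTS template URL into the notation the tile layer expects is a simple string substitution. A minimal sketch (the template URL is a made-up example, not a real service endpoint):

```javascript
// Map WMTS placeholder names to the tile layer's {x}/{y}/{z} notation.
function wmtsToTileUrl(template) {
  return template
    .replace(/\{TileMatrix\}/g, "{z}")
    .replace(/\{TileRow\}/g, "{y}")
    .replace(/\{TileCol\}/g, "{x}");
}

const url = wmtsToTileUrl("https://example.com/wmts/{TileMatrix}/{TileRow}/{TileCol}.png");
console.log(url); // "https://example.com/wmts/{z}/{y}/{x}.png"
```

The rewritten URL can then be passed as the `tileUrl` option of a `TileLayer`, which fills in `{x}`, `{y}`, and `{z}` per tile request.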
-The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map](https://viewer.nationalmap.gov/services/) on top of a map, below the roads and labels.
+For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Tile Service (WMTS), see the [WMTS Tile Layer] sample in the [Azure Maps Samples].
+
+The following screenshot shows the [WMTS Tile Layer] sample overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map] on top of a map, below roads and labels.
-<br/>
+<!--
<iframe height="500" scrolling="no" title="WMTS tile layer" src="https://codepen.io/azuremaps/embed/BapjZVY?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZVY'>WMTS tile layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Customize a tile layer
-The tile layer class has many styling options. Here is a tool to try them out.
+The tile layer class has many styling options. The [Tile Layer Options] sample is a tool to try them out.
-<br/>
+<!--
<iframe height='700' scrolling='no' title='Tile Layer Options' src='//codepen.io/azuremaps/embed/xQeRWX/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xQeRWX/'>Tile Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Next steps
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Add an image layer](./map-add-image-layer.md)+
+[Azure Maps Samples]: https://samples.azuremaps.com
+[Tile Layer using X, Y, and Z]: https://samples.azuremaps.com/?search=tile%20layer&sample=tile-layer-using-x%2C-y%2C-and-z
+[OpenSeaMap project]: https://openseamap.org/index.php
+[WMS Tile Layer]: https://samples.azuremaps.com/?search=tile%20layer&sample=wms-tile-layer
+[U.S. Geological Survey (USGS)]: https://mrdata.usgs.gov/
+[WMTS Tile Layer]: https://samples.azuremaps.com/?search=tile%20layer&sample=wmts-tile-layer
+[U.S. Geological Survey (USGS) National Map]: https://viewer.nationalmap.gov/services
+[Tile Layer Options]: https://samples.azuremaps.com/?search=tile%20layer&sample=tile-layer-options
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
For more information on how to set up and use the Azure Maps map control in a we
### Localizing the map
-If your audience is spread across multiple countries or speak different languages, localization is important.
+If your audience is spread across multiple countries/regions or speaks different languages, localization is important.
**Before: Bing Maps**
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Another option for geocoding a large number of addresses with Azure Maps is to make
### Get administrative boundary data
-In Bing Maps, administrative boundaries for countries, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it's geocoded and the coordinates from the first result are used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. This API doesn't necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country/region, the boundary for the USA would be returned.
+In Bing Maps, administrative boundaries for countries/regions, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it's geocoded and the coordinates from the first result are used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. This API doesn't necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country/region, the boundary for the USA would be returned.
-Azure Maps also provides access to administrative boundaries (countries, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (such as `Seattle, WA`). If the search result has an associated boundary, a geometry ID is provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different than Bing Maps as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
+Azure Maps also provides access to administrative boundaries (countries/regions, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (such as `Seattle, WA`). If the search result has an associated boundary, a geometry ID is provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different from Bing Maps, as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
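To illustrate the flow above, the geometry IDs come back inside each search result, and you collect them before calling the polygon API. The following is a sketch assuming a `dataSources.geometry.id` result shape; verify the exact field names and the mock ID against the Azure Maps Search API reference before relying on them:

```javascript
// Collect geometry IDs from Azure Maps search results that have
// an associated boundary polygon available.
function boundaryGeometryIds(searchResults) {
  return searchResults
    .filter(r => r.dataSources && r.dataSources.geometry)
    .map(r => r.dataSources.geometry.id);
}

// Mock response: only the first result has a boundary geometry.
const results = [
  { address: { freeformAddress: "Seattle, WA" },
    dataSources: { geometry: { id: "mock-geometry-id-1" } } },
  { address: { freeformAddress: "123 Main St" } }
];
console.log(boundaryGeometryIds(results)); // ["mock-geometry-id-1"]
```

The returned IDs would then be passed to the search polygon API to fetch the GeoJSON boundaries.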
To recap:
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
Heat maps, also known as density maps, are a type of overlay on a map used to re
A heat map is useful when users want to visualize vast comparative data: -- Comparing customer satisfaction rates or shop performance among regions or countries.
+- Comparing customer satisfaction rates or shop performance among countries/regions.
- Measuring the frequency which customers visit shopping malls in different locations. - Visualizing vast statistical and geographical data sets.
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
When entering multiple values into the **Location** field, you create a geo-hier
| Button | Description | |:-:|-| | 1 | The drill button on the far right, called Drill Mode, allows you to select a map Location and drill down into that specific location one level at a time. For example, if you turn on the drill-down option and select North America, you move down in the hierarchy to the next level--states in North America. For geocoding, Power BI sends Azure Maps country and state data for North America only. The button on the left goes back up one level. |
-| 2 | The double arrow drills to the next level of the hierarchy for all locations at once. For example, if you're currently looking at countries and then use this option to move to the next level, states, Power BI displays state data for all countries. For geocoding, Power BI sends Azure Maps state data (no country/region data) for all locations. This option is useful if each level of your hierarchy is unrelated to the level above it. |
-| 3 | Similar to the drill-down option, except that you don't need to click on the map. It expands down to the next level of the hierarchy remembering the current level's context. For example, if you're currently looking at countries and select this icon, you move down in the hierarchy to the next level--states. For geocoding, Power BI sends data for each state and its corresponding country/region to help Azure Maps geocode more accurately. In most maps, you'll either use this option or the drill-down option on the far right. This will send Azure as much information as possible and result in more accurate location information. |
+| 2 | The double arrow drills to the next level of the hierarchy for all locations at once. For example, if you're currently looking at countries/regions and then use this option to move to the next level, states, Power BI displays state data for all countries/regions. For geocoding, Power BI sends Azure Maps state data (no country/region data) for all locations. This option is useful if each level of your hierarchy is unrelated to the level above it. |
+| 3 | Similar to the drill-down option, except that you don't need to click on the map. It expands down to the next level of the hierarchy remembering the current level's context. For example, if you're currently looking at countries/regions and select this icon, you move down in the hierarchy to the next level--states. For geocoding, Power BI sends data for each state and its corresponding country/region to help Azure Maps geocode more accurately. In most maps, you'll either use this option or the drill-down option on the far right. This will send Azure as much information as possible and result in more accurate location information. |
## Categorize geographic fields in Power BI
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
-### [3.0.0-preview.7] (May 2nd, 2023)
+### [3.0.0-preview.8] (June 2, 2023)
+
+#### Bug fixes (3.0.0-preview.8)
+
+- Fixed an exception that occurred while updating the property of a layout that no longer exists.
+
+- Fixed an issue where BubbleLayer's accessible indicators didn't update when the data source was modified.
+
+- Fixed an error in subsequent `map.setStyle()` calls if the raw Maplibre style is retrieved in the `stylechanged` event callback on style serialization.
+
+#### Other changes (3.0.0-preview.8)
+
+- Updated attribution logo and link.
+
+#### Installation (3.0.0-preview.8)
+
+The preview is available on [npm][3.0.0-preview.8] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.8][3.0.0-preview.8]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.8/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.8/atlas.min.js"></script>
+ ```
+
+### [3.0.0-preview.7] (May 2, 2023)
#### New features (3.0.0-preview.7)
This document contains information about new features and other changes to the M
#### Bug fixes (3.0.0-preview.7)
-- Fixed token expired exception on relaunches when using AAD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
+- Fixed token expired exception on relaunches when using Azure AD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
- Fixed redundant style definition and thumbnail requests
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
-### [2.2.7] (May 2nd, 2023)
+### [2.3.0] (June 2, 2023)
+
+#### New features (2.3.0)
+
+- **\[BREAKING\]** Refactored the internal StyleManager to replace `_stylePatch` with `transformStyle`. This change will allow road shield icons to update and render properly after a style switch.
+
+#### Bug fixes (2.3.0)
+
+- Fixed an exception that occurred while updating the property of a layout that no longer exists.
+
+- Fixed an issue where BubbleLayer's accessible indicators didn't update when the data source was modified.
+
+#### Other changes (2.3.0)
+
+- Updated attribution logo and link.
+
+### [2.2.7] (May 2, 2023)
#### New features (2.2.7)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### Bug fixes (2.2.7)
-- Fixed token expired exception on relaunches when using AAD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
+- Fixed token expired exception on relaunches when using Azure AD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request
- Fixed redundant style definition and thumbnail requests
Stay up to date on Azure Maps:
> [!div class="nextstepaction"]
> [Azure Maps Blog]
+[3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8
[3.0.0-preview.7]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.7
[3.0.0-preview.6]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.6
[3.0.0-preview.5]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3
[3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2
[3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.0]: https://www.npmjs.com/package/azure-maps-control/v/2.3.0
[2.2.7]: https://www.npmjs.com/package/azure-maps-control/v/2.2.7
[2.2.6]: https://www.npmjs.com/package/azure-maps-control/v/2.2.6
[2.2.5]: https://www.npmjs.com/package/azure-maps-control/v/2.2.5
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
Title: Render coverage
-description: Render coverage tables list the countries that support Azure Maps road tiles.
+description: Render coverage tables list the countries/regions that support Azure Maps road tiles.
Last updated 03/23/2022
# Azure Maps render coverage
-The render coverage tables below list the countries that support Azure Maps road tiles. Both raster and vector tiles are supported. At the lowest resolution, the entire world fits in a single tile. At the highest resolution, a single tile represents 38 square meters. You'll see more details about continents, regions, cities, and individual streets as you zoom in the map. For more information about tiles, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+The render coverage tables below list the countries/regions that support Azure Maps road tiles. Both raster and vector tiles are supported. At the lowest resolution, the entire world fits in a single tile. At the highest resolution, a single tile represents 38 square meters. You'll see more details about continents, regions, cities, and individual streets as you zoom in on the map. For more information about tiles, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
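As a rough sanity check on the tile sizes mentioned above, the ground width of a single tile follows from the standard Web Mercator tiling scheme: the world is one tile at zoom 0 and each tile splits in half per zoom level. This sketch uses the usual ~40,075 km equatorial circumference and the standard latitude correction; it's an approximation, not a value from the coverage tables:

```javascript
// Approximate ground width in meters of one Web Mercator tile.
const EARTH_CIRCUMFERENCE_M = 40075016.686; // equatorial circumference

function tileWidthMeters(zoom, latitudeDeg = 0) {
  const lat = (latitudeDeg * Math.PI) / 180;
  // One tile covers the world at zoom 0; width halves each zoom level,
  // and shrinks by cos(latitude) away from the equator.
  return (Math.cos(lat) * EARTH_CIRCUMFERENCE_M) / 2 ** zoom;
}

console.log(tileWidthMeters(0));  // whole world in one tile
console.log(tileWidthMeters(22)); // a few meters across at the equator
```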
### Legend
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Azure Maps has been localized in a variety of languages across its services. The fol
> * Morocco
> * Pakistan
>
-> After August 1, 2019, the **View** parameter will define the returned map content for the new regions/countries listed above. Azure Maps **View** parameter (also referred to as "user region parameter") is a two letter ISO-3166 Country Code that will show the correct maps for that country/region specifying which set of geopolitically disputed content is returned via Azure Maps services, including borders and labels displayed on the map.
+> After August 1, 2019, the **View** parameter will define the returned map content for the new countries/regions listed above. The Azure Maps **View** parameter (also referred to as the "user region parameter") is a two-letter ISO-3166 country code that shows the correct maps for that country/region by specifying which set of geopolitically disputed content is returned via Azure Maps services, including borders and labels displayed on the map.
Make sure you set up the **View** parameter as required for the REST APIs and the SDKs, which your services are using.
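For illustration, the **View** parameter is passed as a query parameter on requests such as Search. A minimal sketch, where the endpoint shape follows the standard Search API and the key is a placeholder:

```javascript
// Sketch: appending the View (user region) parameter to a Search request.
// The two-letter ISO-3166 code selects the geopolitical view, e.g. "MA" for Morocco.
function buildSearchUrlWithView(query, view, subscriptionKey) {
  const params = new URLSearchParams({
    "api-version": "1.0",
    query,
    view, // two-letter ISO-3166 country/region code
    "subscription-key": subscriptionKey,
  });
  return `https://atlas.microsoft.com/search/fuzzy/json?${params}`;
}

console.log(buildSearchUrlWithView("Rabat", "MA", "<your-key>"));
```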
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
These replies are supported for SMS notifications. The recipient of the SMS can
>If a user has unsubscribed from SMS alerts but is then added to a new action group, they WILL receive SMS alerts for that new action group, but remain unsubscribed from all previous action groups. You might have a limited number of Azure app actions per action group.
-### Countries with SMS notification support
+### Countries/Regions with SMS notification support
| Country code | Country |
|:|:|
You might have a limited number of voice actions per action group.
>
> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region.
-### Countries with Voice notification support
+### Countries/Regions with Voice notification support
| Country code | Country |
|:|:|
| 61 | Australia |
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
var loggerFactory = LoggerFactory.Create(builder =>
}); ```
+> [!NOTE]
+> For more information, see the [getting-started tutorial for OpenTelemetry .NET](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main#getting-started).
+
##### [Java](#tab/java)
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
Continue to use management packs for functionality that can't be provided by oth
> [!NOTE]
> If you enable VM Insights with the Log Analytics agent instead of the Azure Monitor agent, then no additional agent needs to be installed on the VM. Azure Monitor agent is recommended though because of its significant improvements in monitoring the VM in the cloud. The complexity from maintaining multiple agents is offset by the ability to define monitoring in data collection rules which allow you to configure different data collection for different sets of VMs, similar to your strategy for designing management packs.
-### Migrate management pack logic for VM workloads
+## Migrate management pack logic for VM workloads
There are no migration tools to convert SCOM management packs to Azure Monitor because their logic is fundamentally different from Azure Monitor data collection. Migrating management pack logic typically focuses on analyzing the data collected by SCOM and identifying the monitoring scenarios that can be replicated by Azure Monitor. As you customize Azure Monitor to meet your requirements for different applications and components, you can start to retire different management packs and legacy agents in SCOM. Management packs in SCOM contain rules and monitors that combine the collection of data and the resulting alert into a single end-to-end workflow. Data already collected by SCOM is rarely used for alerting. Azure Monitor separates data collection and alerts into separate processes. Alert rules access data from Azure Monitor Logs and Azure Monitor Metrics that has already been collected from agents. Also, rules and monitors are typically narrowly focused on specific data such as a particular event or performance counter. Data collection rules in Azure Monitor are typically broader, collecting multiple sets of events and performance counters in a single DCR.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
This article describes [Cost optimization](/azure/architecture/framework/cost/)
## Virtual machines
-### Design checklist
--
-> [!div class="checklist"]
-> - Configure VM agents to collect only important events.
-> - Ensure that VMs aren't sending data to multiple workspaces.
-> - Use transformations to filter unnecessary data from collected events.
-
-### Configuration recommendations
-
-| Recommendation | Benefit |
-|:|:|
-| Configure VM agents to collect only important events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#control-costs) for guidance on data to collect and strategies for using [XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to limit it.|
-| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
-| Use transformations to filter unnecessary data from collected events. | [Transformations](essentials/data-collection-transformations.md) can be used in data collection rules to remove unnecessary data or even entire columns from events collected from the virtual machine which can significantly reduce the cost for their ingestion and retention. |
## Container insights
This article describes [Cost optimization](/azure/architecture/framework/cost/)
| Configure Basic Logs. | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
++
## Application Insights
### Design checklist
azure-monitor Best Practices Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-vm.md
+
+ Title: Best practices for monitoring virtual machines in Azure Monitor
+description: Best practices organized by the pillars of the Well-Architected Framework (WAF) for monitoring virtual machines in Azure Monitor.
+++ Last updated : 06/08/2023+++
+# Best practices for monitoring virtual machines in Azure Monitor
+This article provides architectural best practices for monitoring virtual machines and their client workloads using Azure Monitor. The guidance is based on the five pillars of architecture excellence described in [Azure Well-Architected Framework](/azure/architecture/framework/).
+++
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to monitor your virtual machines and their client workloads for failure.
+++
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to monitor the security of your virtual machines.
+++
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
+++
+## Operational excellence
+Operational excellence refers to the operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for monitoring your virtual machines.
+++
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to monitor the performance of your virtual machines.
++
+## Next step
+
+- [Get complete guidance on configuring monitoring for virtual machines](vm/monitor-virtual-machine.md).
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
Because your Azure Monitor cost is dependent on how much data you collect, ensur
[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]
-A typical virtual machine generates between 1 GB and 3 GB of data per month. This data size depends on the configuration of the machine, the workloads running on it, and the configuration of your DCRs. Before you configure data collection across your entire virtual machine environment, begin collection on some representative machines to better predict your expected costs when deployed across your environment. Use log queries in [Data volume by computer](../logs/analyze-usage.md#data-volume-by-computer) to determine the amount of billable data collected for each machine and adjust accordingly.
+A typical virtual machine generates between 1 GB and 3 GB of data per month. This data size depends on the configuration of the machine, the workloads running on it, and the configuration of your DCRs. Before you configure data collection across your entire virtual machine environment, begin collection on some representative machines to better predict your expected costs when deployed across your environment. Use [Log Analytics workspace insights](../logs/log-analytics-workspace-insights-overview.md) or log queries in [Data volume by computer](../logs/analyze-usage.md#data-volume-by-computer) to determine the amount of billable data collected for each machine and adjust accordingly.
-Each data source that you collect might have a different method for filtering out unwanted data. You can use [transformations](../essentials/data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+Evaluate the data you collect and, to reduce your costs, filter out any data that meets the following criteria. Each data source that you collect may have a different method for filtering out unwanted data. See the sections below for the details of each of the common data sources.
+- Not used for alerting.
+- No known forensic or diagnostic value.
+- Not required by regulators.
+- Not used in any dashboards or workbooks.
++
+You can also use [transformations](../essentials/data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+
+Filter data as much as possible before it's sent to Azure Monitor to avoid a [potential charge for filtering too much data using transformations](../essentials/data-collection-transformations.md#cost-for-transformations). Use [transformations](../essentials/data-collection-transformations.md) for record filtering using complex logic and for filtering columns with data that you don't require.
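As a rough sketch of what such a transformation looks like inside a data collection rule, the fragment below keeps only Warning and Error events and drops a verbose column. The destination name is hypothetical, the property names follow the DCR data-flow schema, and the KQL assumes the standard Windows `Event` table columns:

```json
{
  "dataFlows": [
    {
      "streams": ["Microsoft-Event"],
      "destinations": ["myWorkspace"],
      "transformKql": "source | where EventLevelName in ('Error', 'Warning') | project-away ParameterXml"
    }
  ]
}
```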
## Default data collection Azure Monitor automatically performs the following data collection without requiring any other configuration.
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
This guide describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collection of telemetry critical for monitoring and analysis and visualization of collected data to identify trends. It also shows you how to configure alerting to be proactively notified of critical issues.
> [!NOTE]
-> This scenario describes how to implement complete monitoring of your enterprise Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
+> This guide describes how to implement complete monitoring of your enterprise Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
## Types of machines
The articles in this guide provide guidance on configuring VM insights and using
## Security monitoring
-Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) and [Microsoft Sentinel](../../sentinel/index.yml). Configuration of these services is not included in this guide.
+Azure Monitor focuses on operational data, while security monitoring in Azure is performed by other services such as [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) and [Microsoft Sentinel](../../sentinel/index.yml). Configuration of these services is not included in this guide.
> [!IMPORTANT]
> The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage.
The following table lists the integration points for Azure Monitor with the secu
See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) for guidance on the most effective workspace design for your requirements taking into account all your services that use them.
-| Integration point | Azure Monitor | Microsoft Defender for Cloud | Microsoft Sentinel | Defender for Endpoint |
+| Integration point | Azure Monitor | Microsoft<br>Defender for Cloud | Microsoft<br>Sentinel | Microsoft<br>Defender for Endpoint |
|:|::|::|::|::|
| Collects security events | X<sup>1</sup> | X | X | X |
| Stores data in Log Analytics workspace | X | X | X | |
azure-monitor Tutorial Monitor Vm Alert Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert-availability.md
Title: Create availability alert rule for Azure virtual machine (preview)
-description: Create an alert rule in Azure Monitor to proactively notify you if a virtual machine is unavailable.
+ Title: Create availability alert rule for multiple Azure virtual machines (preview)
+description: Create a single alert rule in Azure Monitor to proactively notify you if any virtual machine in a subscription or resource group is unavailable.
Previously updated : 12/03/2022 Last updated : 06/07/2023
-# Tutorial: Create availability alert rule for Azure virtual machine (preview)
-One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method for this is to create a metric alert rule in Azure Monitor using the **VM availability** metric which is currently in public preview.
+# Tutorial: Create availability alert rule for multiple Azure virtual machines (preview)
+One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method for this is to create a metric alert rule in Azure Monitor using the [VM availability](../../virtual-machines/monitor-vm-reference.md#vm-availability-metric-preview) metric which is currently in public preview.
+
+You can create an availability alert rule for a single VM using the VM Availability metric with [recommended alerts](tutorial-monitor-vm-alert-recommended.md). This tutorial shows how to create a single rule that will apply to all virtual machines in a subscription or resource group in a particular region.
+
+> [!TIP]
+> While this article uses the metric value *VM availability metric,* you can use the same process to alert on any metric value.
In this article, you learn how to:
> [!div class="checklist"]
-> * View the VM availability metric that indicates whether a VM is running.
-> * Create an alert rule using the VM availability metric to notify you if the virtual machine is unavailable.
+> * View the VM availability metric in metrics explorer.
+> * Create an alert rule targeting a subscription or resources group.
> * Create an action group to be proactively notified when an alert is created.
-> [!NOTE]
-> You can now create an availability alert rule using the VM Availability metrics with [recommended alerts](tutorial-monitor-vm-alert-recommended.md).
-
## Prerequisites
To complete the steps in this article you need the following:
-- An Azure virtual machine to monitor.
+- At least one Azure virtual machine to monitor.
-## View the VM availability metric
-Start by viewing the VM availability metric for your VM. Open the **Overview** page for the VM and then the **Monitoring** tab. This shows trending for several common metrics for the VM. Scroll down to view the chart for VM availability (preview). The value of the metric will be 1 when the VM is running and 0 when it's not.
+## View VM availability metric in metrics explorer
+There are multiple methods to create an alert rule in Azure Monitor. In this tutorial, we'll create it from [metrics explorer](../essentials/metrics-getting-started.md), which will prefill required values such as the scope and metric we want to monitor. You'll just need to provide the detailed logic for the alert rule.
+1. Select **Metrics** from the **Monitor** menu in the Azure portal.
+2. In **Select a scope**, select either a subscription or a resource group with VMs to monitor.
+3. Under **Refine scope**, for **Resource type**, select *Virtual machines*, and select the **Location** with VMs to monitor.
+4. Click **Apply** to set the scope for metrics explorer.
-## Create alert rule
-There are multiple methods to create an alert rule in Azure Monitor. In this tutorial, we'll create it right from the metric value. This will prefill required values such as the VM and metric we want to monitor. You'll just need to provide the detailed logic for the alert rule.
+ :::image type="content" source="media/tutorial-monitor-vm/metric-explorer-scope.png" alt-text="Screenshot of metrics explorer scope selection." lightbox="media/tutorial-monitor-vm/metric-explorer-scope.png":::
-> [!TIP]
-> You can create an alert rule for a group of VMs in the same region by changing the scope of the alert rule to a subscription or resource group.
-Click on the **VM availability** chart to open the metric in [metrics explorer](../essentials/metrics-getting-started.md). This is a tool in Azure Monitor that allows you to interactively analyze metrics collected from your Azure resources. Click **New alert rule**. This starts the creation of a new alert rule using the VM availability metric and the current VM.
+6. Select *VM Availability metric (preview)* for **Metric**. The value is displayed for each VM in the selected scope.
+
+ :::image type="content" source="media/tutorial-monitor-vm/vm-availability-metric-explorer.png" alt-text="Screenshot of VM Availability metric in metrics explorer." lightbox="media/tutorial-monitor-vm/vm-availability-metric-explorer.png":::
+7. Click **New Alert Rule** to create an alert rule and open its configuration.
-Set the following values for the **Alert logic**. This specifies that the alert will fire whenever the average value of the availability metric falls below 1, which indicates that the VM isn't running.
+8. Set the following values for the **Alert logic**. This specifies that the alert will fire whenever the average value of the availability metric falls below 1, which indicates that one of the VMs in the selected scope isn't running.
-| Setting | Value |
-|:|:|
-| Threshold | Static |
-| Aggregation Type | Average |
-| Operator | Less than |
-| Unit | Count |
-| Threshold value | 1 |
+ | Setting | Value |
+ |:|:|
+ | Threshold | Static |
+ | Aggregation Type | Average |
+ | Operator | Less than |
+ | Unit | Count |
+ | Threshold value | 1 |
-Set the following values for **When to evaluate**. This specifies that the rule will run every minute, using the collected values from the previous minute.
+9. Set the following values for **When to evaluate**. This specifies that the rule will run every minute, using the collected values from the previous minute.
+ | Setting | Value |
+ |:|:|
+ | Check every | 1 minute |
+ | Lookback period | 1 minute |
-| Setting | Value |
-|:|:|
-| Check every | 1 minute |
-| Loopback period | 1 minute |
+ :::image type="content" source="media/tutorial-monitor-vm/vm-availability-metric-alert-logic.png" alt-text="Screenshot of alert rule details for VM Availability metric." lightbox="media/tutorial-monitor-vm/vm-availability-metric-alert-logic.png":::
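Taken together, the alert logic and evaluation settings above correspond roughly to this fragment of a metric alert rule resource. This is a sketch only: the `odata.type` and metric name are assumptions based on the scheduled metric alert schema for a multi-resource scope, not values copied from the tutorial:

```json
{
  "evaluationFrequency": "PT1M",
  "windowSize": "PT1M",
  "criteria": {
    "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
    "allOf": [
      {
        "name": "VmAvailability",
        "metricName": "VmAvailabilityMetric",
        "timeAggregation": "Average",
        "operator": "LessThan",
        "threshold": 1
      }
    ]
  }
}
```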
The **Actions** page allows you to add one or more [action groups](../alerts/act
> [!TIP]
> If you already have an action group, click **Add action group** to add an existing group to the alert rule instead of creating a new one.
-Click **Create action group** to create a new one.
+1. Click **Create action group** to create a new one.
+ :::image type="content" source="media/tutorial-monitor-vm/vm-availability-metric-create-action-group.png" alt-text="Screenshot of option to create new action group." lightbox="media/tutorial-monitor-vm/vm-availability-metric-create-action-group.png":::
-Select a **Subscription** and **Resource group** for the action group and give it an **Action group name** that will appear in the portal and a **Display name** that will appear in email and SMS notifications.
+2. Select a **Subscription** and **Resource group** for the action group and give it an **Action group name** that will appear in the portal and a **Display name** that will appear in email and SMS notifications.
+ :::image type="content" source="media/tutorial-monitor-vm/vm-availability-metric-action-group-basics.png" lightbox="./media/tutorial-monitor-vm/vm-availability-metric-action-group-basics.png" alt-text="Screenshot of action group basics.":::
-Select **Notifications** and add one or more methods to notify appropriate people when the alert is fired.
+3. Select **Notifications** and add one or more methods to notify appropriate people when the alert is fired.
+ :::image type="content" source="media/tutorial-monitor-vm/action-group-notifications.png" lightbox="./media/tutorial-monitor-vm/action-group-notifications.png" alt-text="Screenshot showing action group notifications.":::
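The action group created in the steps above can also be expressed as an ARM resource. This is an illustrative sketch under stated assumptions — the resource name, short name, email address, and API version are placeholders, not values from this tutorial:

```json
{
  "type": "microsoft.insights/actionGroups",
  "apiVersion": "2022-06-01",
  "name": "vm-alert-action-group",
  "location": "Global",
  "properties": {
    "groupShortName": "vmalerts",
    "enabled": true,
    "emailReceivers": [
      {
        "name": "Primary contact",
        "emailAddress": "admin@contoso.com",
        "useCommonAlertSchema": true
      }
    ]
  }
}
```

The `groupShortName` is what appears in email and SMS notifications, mirroring the **Display name** field in the portal.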
## Configure details
-The **Details** page allows you to configure different settings for the alert rule.
-| Setting | Description |
-|:|:|
-| Subscription | Subscription where the alert rule will be stored. |
-| Resource group | Resource group where the alert rule will be stored. This doesn't need to be in the same resource group as the resource that you're monitoring. |
-| Severity | The severity allows you to group alerts with a similar relative importance. A severity of **Error** is appropriate for an unresponsive virtual machine. |
-| Alert rule name | Name of the alert that's displayed when it fires. |
-| Alert rule description | Optional description of the alert rule. |
+1. Configure different settings for the alert rule on the **Details** page.
+ | Setting | Description |
+ |:|:|
+ | Subscription | Subscription where the alert rule will be stored. |
+ | Resource group | Resource group where the alert rule will be stored. This doesn't need to be in the same resource group as the resource that you're monitoring. |
+ | Severity | The severity allows you to group alerts with a similar relative importance. A severity of **Error** is appropriate for an unresponsive virtual machine. |
+ | Alert rule name | Name of the alert that's displayed when it fires. |
+ | Alert rule description | Optional description of the alert rule. |
+
+ :::image type="content" source="media/tutorial-monitor-vm/alert-rule-details.png" lightbox="media/tutorial-monitor-vm/alert-rule-details.png" alt-text="Screenshot showing alert rule details.":::
-Click **Review + create** to create the alert rule.
+2. Click **Review + create** to create the alert rule.
## View the alert
-To test the alert rule, stop the virtual machine. If you configured a notification in your action group, then you should receive that notification within a few seconds. You'll also see an alert indicated in the summary shown in the **Alerts** page for the virtual machine.
+To test the alert rule, stop one or more virtual machines in the scope you specified. If you configured a notification in your action group, then you should receive that notification within a few seconds. You'll also see an alert for each VM on the **Alerts** page.
:::image type="content" source="media/tutorial-monitor-vm/vm-availability-metric-alert.png" lightbox="media/tutorial-monitor-vm/alerts-summary.png" alt-text="Alerts summary":::

## Next steps
-Now that you have alerting in place when the VM goes down, enable VM insights to install the Azure Monitor agent which collects
+Now that you have alerting in place when the VM goes down, enable VM insights to install the Azure Monitor agent which collects additional data from the client and provides additional analysis tools.
> [!div class="nextstepaction"]
> [Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md)
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
description: Overview of VM insights, which monitors the health and performance
Previously updated : 06/21/2022 Last updated : 06/08/2023 # Overview of VM insights
-VM insights monitors the performance and health of your virtual machines and virtual machine scale sets. It monitors their running processes and dependencies on other resources. VM insights can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues. It can also help you understand whether an issue is related to other dependencies.
-
-> [!NOTE]
-> VM insights now supports [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). For more information, see [Enable VM insights overview](vminsights-enable-overview.md#agents).
+VM insights provides a quick and easy method for getting started monitoring the client workloads on your virtual machines and virtual machine scale sets. It displays an inventory of your existing VMs and provides a guided experience to enable base monitoring for them. It also monitors the performance and health of your virtual machines and virtual machine scale sets by collecting data on their running processes and dependencies on other resources.
VM insights supports Windows and Linux operating systems on:
- On-premises virtual machines.
- Virtual machines hosted in another cloud environment.
-VM insights stores its data in Azure Monitor Logs, which allows it to deliver powerful aggregation and filtering and to analyze data trends over time. You can view this data in a single VM from the virtual machine directly. Or, you can use Azure Monitor to deliver an aggregated view of multiple VMs.
+VM insights provides a set of predefined workbooks that allow you to view trending of collected performance data over time. You can view this data in a single VM from the virtual machine directly, or you can use Azure Monitor to deliver an aggregated view of multiple VMs.
![Screenshot that shows the VM insights perspective in the Azure portal.](media/vminsights-overview/vminsights-azmon-directvm.png)
+
## Pricing

There's no direct cost for VM insights, but you're charged for its activity in the Log Analytics workspace. Based on the pricing that's published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/), VM insights is billed for:
Access VM insights for all your virtual machines and virtual machine scale sets
## Limitations
+- VM insights collects a predefined set of metrics from the VM client and doesn't collect any event data. You can use the Azure portal to [create data collection rules](../agents/data-collection-rule-azure-monitor-agent.md) to collect events and additional performance counters using the same Azure Monitor agent used by VM insights.
- VM insights doesn't support sending data to multiple Log Analytics workspaces (multi-homing).

## Next steps
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 05/31/2023 Last updated : 06/09/2023 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Azure Application Consistent Snapshot tool (AzAcSnap)](azacsnap-introduction.md) * [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
+* [SAP HANA on Azure NetApp Files - Data protection with BlueXP backup and recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-hana-on-azure-netapp-files-data-protection-with-bluexp/ba-p/3840116)
* [Azure NetApp Files Backup for SAP Solutions](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/anf-backup-for-sap-solutions/ba-p/3717977) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
There are limits, per subscription, for deploying resources using Compute Galler
[!INCLUDE [virtual-machine-scale-sets-limits](../../../includes/azure-virtual-machine-scale-sets-limits.md)]
+## Dev tunnels limits
+
+
## See also

* [Understand Azure limits and increases](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests/)
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters (Preview)
+ Title: Deploy vSAN stretched clusters
description: Learn how to deploy vSAN stretched clusters. Previously updated : 09/02/2022 Last updated : 06/12/2023
-# Deploy vSAN stretched clusters (Preview)
+# Deploy vSAN stretched clusters
-In this article, you'll learn how to implement a vSAN stretched cluster for an Azure VMware Solution private cloud.
+In this article, learn how to implement a vSAN stretched cluster for an Azure VMware Solution private cloud.
## Background
It's important to understand that stretched cluster private clouds only offer an
These types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of them, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. A stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability, because a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region: for example, one vCenter Server, one NSX-T Manager cluster, and one NSX-T Data Center Edge VM pair.
-## Deploy a stretched cluster private cloud
+## Stretched clusters region availability
+
+Azure VMware Solution stretched clusters are available in the following regions:
-Currently, Azure VMware Solution stretched clusters is in the (preview) phase. While in the (preview) phase, you must contact Microsoft to request and qualify for support.
+- UK South (on AV36)
+- West Europe (on AV36)
+- Germany West Central (on AV36)
+- Australia East (on AV36P)
## Prerequisites
-To request support, send an email request to **avsStretchedCluster@microsoft.com** with the following details:
+Follow the [Request Host Quota](/azure/azure-vmware/request-host-quota-azure-vmware-solution) process to get the quota reserved for the required number of nodes. Provide the following details to facilitate the process:
- Company name
-- Point of contact (email)
-- Subscription (a new, separate subscription is required)
-- Region requested (West Europe, UK South, Germany West Central)
-- Number of nodes in first stretched cluster (minimum 6, maximum 16 - in multiples of two)
-- Estimated provisioning date (used for billing purposes)
+- Point of contact: email
+- Subscription ID: a new, separate subscription is required
+- Type of private cloud: "Stretched Cluster"
+- Region requested: UK South, West Europe, Germany West Central, or Australia East
+- Number of nodes in first stretched cluster: minimum 6, maximum 16 - in multiples of two
+- Estimated expansion plan
+
+## Deploy a stretched cluster private cloud
When the request support details are received, quota will be reserved for a stretched cluster environment in the region requested. The subscription gets enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days upon which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](./tutorial-create-private-cloud.md?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice.
Next, repeat the process to [peer ExpressRoute Global Reach](./tutorial-expressr
:::image type="content" source="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png" alt-text="Screenshot shows page to peer both availability zones to on-premises Express Route Global Reach."lightbox="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png":::
-## Supported scenarios
-
-The following scenarios are supported:
-- Workload connectivity to internet from both AZs via Customer vWAN or On-premises data center
-- Private DNS resolution
-- Placement policies (except for VM-AZ affinity)
-- Cluster scale out and scale in
-- The following SPBM policies are supported, with a PFTT of "Dual Site Mirroring" and SFTT of "RAID 1 (Mirroring)" enabled as the default policies for the cluster:
- - Site disaster tolerance settings (PFTT):
- - Dual site mirroring
- - None - keep data on preferred
- - None - keep data on non-preferred
- - Local failures to tolerate (SFTT):
- - 1 failure – RAID 1 (Mirroring)
- - 1 failure – RAID 5 (Erasure coding), requires a minimum of 4 hosts in each AZ
- - 2 failures – RAID 1 (Mirroring)
- - 2 failures – RAID 6 (Erasure coding), requires a minimum of 6 hosts in each AZ
- - 3 failures – RAID 1 (Mirroring)
+## Storage policies supported
-In this phase, while the creation of the private cloud and the first stretched cluster is enabled via the Azure portal, open a [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for other supported scenarios and configurations listed below. While doing so, make sure you select **Stretched Clusters** as a Problem Type.
+The following SPBM policies are supported with a PFTT of "Dual Site Mirroring" and SFTT of "RAID 1 (Mirroring)" enabled as the default policies for the cluster:
-Once stretched clusters are made generally available, it's expected that all the following supported scenarios will be enabled in an automated self-service fashion.
-- HCX installation, deployment, removal, and support for migration
-- Connect a private cloud in another region to a stretched cluster private cloud
-- Connect two stretched cluster private clouds in a single region
-- Configure Active Directory as an identity source for vCenter Server
-- A PFTT of "Keep data on preferred" or "Keep data on non-preferred" requires keeping VMs on either one of the availability zones. For such VMs, open a support ticket to ensure that those VMs are pinned to an availability zone.
-- Cluster addition
-- Cluster deletion
-- Private cloud deletion
-
-## Supported regions
-
-Azure VMware Solution stretched clusters are available in the following regions:
-- UK South
-- West Europe
-- Germany West Central
+- Site disaster tolerance settings (PFTT):
+ - Dual site mirroring
+ - None - keep data on preferred
+ - None - keep data on non-preferred
+- Local failures to tolerate (SFTT):
+ - 1 failure – RAID 1 (Mirroring)
+ - 1 failure – RAID 5 (Erasure coding), requires a minimum of 4 hosts in each AZ
+ - 2 failures – RAID 1 (Mirroring)
+ - 2 failures – RAID 6 (Erasure coding), requires a minimum of 6 hosts in each AZ
+ - 3 failures – RAID 1 (Mirroring)
## FAQ

### Are any other regions planned?
-As of now, the only 3 regions listed above are planned for support of stretched clusters.
+Currently, there are [four regions supported](#stretched-clusters-region-availability) for stretched clusters.
-### What kind of SLA does Azure VMware Solution provide with the stretched clusters (preview) release?
+### What kind of SLA does Azure VMware Solution provide with the stretched clusters?
A private cloud created with a vSAN stretched cluster is designed to offer a 99.99% infrastructure availability commitment when the following conditions exist:

- A minimum of 6 nodes are deployed in the cluster (3 in each availability zone)
No. A stretched cluster is created between two availability zones, while the thi
- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.
- Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority.
- The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ.
-- Preview and recent GA features for standard private cloud environments aren't supported in a stretched cluster environment.
-- Disaster recovery addons like VMware SRM, Zerto, and JetStream are currently not supported in a stretched cluster environment.
+- Currently not supported in a stretched cluster environment:
+ - Recently released features like Public IP down to NSX Edge and external storage, like ANF datastores.
+ - Disaster recovery addons like VMware SRM, Zerto, and JetStream.
+- Open a [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for the following scenarios (be sure to select **Stretched Clusters** as a **Problem Type**):
+ - Connect a private cloud to a stretched cluster private cloud.
+ - Connect two stretched cluster private clouds in a single region.
### What kind of latencies should I expect between the availability zones (AZs)?
Customers will be charged based on the number of nodes deployed within the priva
### Will I be charged for the witness node and for inter-AZ traffic?
-No. While in (preview), customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
-
-### Which SKUs are available?
-
-Stretched clusters will solely be supported on the AV36 SKU.
+No. Customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Bastion so that I can securely connect to my Azure virtual machines. Previously updated : 05/18/2023 Last updated : 06/08/2023
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md
Title: 'Upgrade a SKU'
+ Title: 'Upgrade or view a SKU: portal'
-description: Learn how to change Tiers from the Basic to the Standard SKU.
+description: Learn how to view a SKU and change tiers from the Basic to the Standard SKU.
Previously updated : 05/17/2023 Last updated : 06/08/2023
-# Upgrade a SKU
+# View or upgrade a SKU
-This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you upgrade, you can't revert back to the Basic SKU without deleting and reconfiguring Bastion. Currently, this setting can be configured in the Azure portal only. For more information about features and SKUs, see [Configuration settings](configuration-settings.md).
+This article helps you view and upgrade Azure Bastion from the Basic SKU tier to the Standard SKU tier. Once you upgrade, you can't revert back to the Basic SKU without deleting and reconfiguring Bastion. For more information about features and SKUs, see [Configuration settings](configuration-settings.md).
-## Configuration steps
+
+## View a SKU
+
+To view the SKU for your bastion host, use the following steps.
1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Azure portal, go to your bastion host.
+1. In the left pane, select **Configuration** to open the Configuration page. In the following example, Bastion is configured to use the **Basic** SKU tier.
+
+ Notice that when you use the Basic SKU, the features you can configure are limited. You can upgrade to a higher SKU using the steps in the next section.
+
+ :::image type="content" source="./media/upgrade-sku/view-sku.png" alt-text="Screenshot of the configuration page with the Basic SKU." lightbox="./media/upgrade-sku/view-sku.png":::
+
+## Upgrade a SKU
+
+Use the following steps to upgrade to the Standard SKU.
+
1. In the Azure portal, go to your Bastion host.
1. On the **Configuration** page, for **Tier**, select **Standard**.
- :::image type="content" source="./media/upgrade-sku/select-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/select-sku.png":::
-
+ :::image type="content" source="./media/upgrade-sku/upgrade-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/upgrade-sku.png":::
1. You can add features at the same time you upgrade the SKU. You don't need to upgrade the SKU and then go back to add the features as a separate step.
1. Select **Apply** to apply changes. The bastion host updates. This takes about 10 minutes to complete.
bastion Vm About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-about.md
description: Learn about VM connections and features when connecting using Azure
Previously updated : 05/17/2023 Last updated : 06/08/2023
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
Request Body
"deploymentConfiguration": { "virtualMachineConfiguration": { "imageReference": {
- "publisher": "canonical",
- "offer": "ubuntuserver",
- "sku": "20.04-lts",
+ "publisher": "almalinux",
+ "offer": "almalinux",
+ "sku": "9-gen1",
"version": "latest" },
- "nodeAgentSkuId": "batch.node.ubuntu 20.04",
+ "nodeAgentSkuId": "batch.node.el 9",
"extensions": [ { "name": "secretext",
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
Here are some property options that you can use to configure a transcription whe
|`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, see [Destination container URL](#destination-container-url).|
+|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
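To illustrate how a few of these properties fit together, here's a sketch of a transcription creation request body for the v3.1 REST API. The storage URLs are placeholders, and this combines only a subset of the properties in the table above:

```json
{
  "displayName": "My transcription",
  "locale": "en-US",
  "contentUrls": [
    "https://<storage-account>.blob.core.windows.net/<container>/audio1.wav"
  ],
  "properties": {
    "diarizationEnabled": true,
    "destinationContainerUrl": "https://<storage-account>.blob.core.windows.net/<results-container>?<SAS token>"
  }
}
```

Note that `contentUrls` and `destinationContainerUrl` aren't returned in the response, per the table above.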
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
This article assumes that you have working knowledge of the Command Prompt windo
## Download and install

## Create a resource configuration
spx --% config @region --clear
## Basic usage
+> [!NOTE]
+> When you use the Speech CLI in a container, include the `--host` option. For example, run `spx recognize --host wss://localhost:5000/ --file myaudio.wav` to recognize speech from an audio file in a [speech to text container](speech-container-stt.md).
+ This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help that's built into the tool by running the following command: ```console
Additional help commands are listed in the console output. You can enter these c
## Speech to text (speech recognition)
+> [!NOTE]
+> You can't use your computer's microphone when you run the Speech CLI within a Docker container. However, you can read from and save audio files in your local mounted directory.
+ To convert speech to text (speech recognition) by using your system's default microphone, run the following command: ```console
With the Speech CLI, you can also recognize speech from an audio file. Run the f
spx recognize --file /path/to/file.wav ```
-> [!NOTE]
-> If you're using a Docker container, `--microphone` will not work.
->
-> If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted previously.
- > [!TIP] > If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```.
If you want to save the output of your translation, use the `--output` flag. In
spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --output file /some/file/path/russian_translation.txt ```
-> [!NOTE]
-> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
- > [!TIP] > If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help translate```.
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
Previously updated : 06/06/2023 Last updated : 06/08/2023 keywords:
Azure OpenAI Service includes a content filtering system that works alongside c
In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that may violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
-The following sections provide information about the content filtering categories, the filtering severity levels, and API scenarios to be considered in application design and implementation.
+The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
## Content filtering categories
-The content filtering system integrated in Azure OpenAI Service contains neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but isn't subject to filtering.
-
-The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low is not filtered by the content filters.
+The content filtering system integrated in the Azure OpenAI Service contains neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
### Categories
The default content filtering configuration is set to filter at the medium sever
| Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
| High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual power exchange or abuse.|
+## Configurability (preview)
+
+The default content filtering configuration is set to filter at the medium severity threshold for all four content harm categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low is not filtered by the content filters. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
+
+| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
+|--|--|--|--|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
+| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
+
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning content filters off. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+
+Content filtering configurations are created within a Resource in Azure AI Studio, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).
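The severity table above reduces to a simple threshold rule, which can be sketched as follows. This is an illustrative sketch only; the severity names mirror the table, and the function is not part of the service API.

```python
# Illustrative sketch of the filtering rule described in the table above.
# Severity levels in ascending order; 'safe' is annotated but never filtered.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def is_filtered(detected_severity, threshold):
    """Return True if content at `detected_severity` is filtered under a
    configuration whose lowest filtered level is `threshold`.
    threshold=None models the 'No filters' configuration."""
    if threshold is None:
        return False
    if detected_severity == "safe":
        return False  # labeled in annotations but not subject to filtering
    return SEVERITY_ORDER.index(detected_severity) >= SEVERITY_ORDER.index(threshold)

# Default configuration filters at medium and above:
assert is_filtered("medium", "medium")
assert not is_filtered("low", "medium")
```

Under this rule, the "Low, medium, high" row corresponds to `threshold="low"`, the default to `threshold="medium"`, and the approval-gated rows to `threshold="high"` and `threshold=None`.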
+
+ :::image type="content" source="../media/content-filters/configuration.png" alt-text="Screenshot of the content filter configuration UI" lightbox="../media/content-filters/configuration.png":::
 ## Scenario details

When the content filtering system detects harmful content, you'll receive either an error on the API call (if the prompt was deemed inappropriate), or `finish_reason` on the response will be set to `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for scenarios where the content returned by the Completions API is filtered, which may result in incomplete content. How you act on this information will be application specific. The behavior can be summarized in the following points:
openai.api_version = "2023-06-01-preview" # API version required to test out Annotations
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
try:
-    openai.Completion.create(
-        prompt="<HARMFUL_PROMPT>",
+    response = openai.Completion.create(
+        prompt="<PROMPT>",
         engine="<MODEL_DEPLOYMENT_NAME>",
     )
+    print(response)
+except openai.error.InvalidRequestError as e:
+    if e.error.code == "content_filter" and e.error.innererror:
+        content_filter_result = e.error.innererror.content_filter_result
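Beyond the error path shown above, a completion can also come back partially filtered rather than rejected. A minimal sketch of checking `finish_reason`, treating the response as the dict-like object the OpenAI Python library returns (field names here are illustrative, not exhaustive):

```python
# Sketch: detect whether any returned choice was truncated by the content
# filter. The dict shape mirrors the OpenAI Python library's Completion
# object; treat the field names as illustrative.
def completion_was_filtered(response):
    """Return True if any choice has finish_reason == 'content_filter'."""
    return any(
        choice.get("finish_reason") == "content_filter"
        for choice in response.get("choices", [])
    )

# Example response fragment with one filtered choice:
filtered = {"choices": [{"text": "partial output", "finish_reason": "content_filter"}]}
assert completion_was_filtered(filtered)
```

When this check is true, the returned text may be incomplete, and how you surface that to users is application specific.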
cognitive-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/content-filters.md
+
+ Title: 'How to use content filters (preview) with Azure OpenAI Service'
+
+description: Learn how to use content filters (preview) with Azure OpenAI Service
+
+ Last updated : 6/5/2023
+
+recommendations: false
+keywords:
++
+# How to configure content filters with Azure OpenAI Service
+
+> [!NOTE]
+> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for full content filtering control, including (i) configuring content filters at severity level high only or (ii) turning the content filters off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
+
+The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels (safe, low, medium, and high). The default content filtering configuration is set to filter at the medium severity threshold for all four content harm categories for both prompts and completions. That means that content detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md).
+
+Content filters can be configured at resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
+
+The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below. Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
+
+| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
+|--|--|--|--|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
+| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
+
+<sup>\*</sup> Only approved customers have full content filtering control, including configuring content filters at severity level high only or turning the content filters off. Managed customers only can apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+
+## Configuring content filters via Azure AI Studio (preview)
+
+The following steps show how to set up a customized content filtering configuration for your resource.
+
+1. Go to Azure AI Studio and navigate to the Content Filters tab (in the bottom left navigation, as designated by the red box below).
+
+ :::image type="content" source="../media/content-filters/studio.png" alt-text="Screenshot of the AI Studio UI with Content Filters highlighted" lightbox="../media/content-filters/studio.png":::
+
+2. Create a new customized content filtering configuration.
+
+ :::image type="content" source="../media/content-filters/create-filter.png" alt-text="Screenshot of the content filtering configuration UI with create selected" lightbox="../media/content-filters/create-filter.png":::
+
+ This leads to the following configuration view, where you can choose a name for the custom content filtering configuration.
+
+ :::image type="content" source="../media/content-filters/filter-view.png" alt-text="Screenshot of the content filtering configuration UI" lightbox="../media/content-filters/filter-view.png":::
+
+3. This is the view of the default content filtering configuration, where content is filtered at medium and high severity levels for all categories. You can modify the content filtering severity level for both prompts and completions separately (configuration for prompts is in the left column and configuration for completions is in the right column, as designated with the blue boxes below) for each of the four content categories (content categories are listed on the left side of the screen, as designated with the green box below). There are three severity levels for each category that are partially or fully configurable: Low, medium, and high (labeled at the top of each column, as designated with the red box below).
+
+ :::image type="content" source="../media/content-filters/severity-level.png" alt-text="Screenshot of the content filtering configuration UI with user prompts and model completions highlighted" lightbox="../media/content-filters/severity-level.png":::
+
+4. If you determine that your application or usage scenario requires stricter filtering for some or all content categories, you can configure the settings, separately for prompts and completions, to filter at more severity levels than the default setting. An example is shown in the image below, where the filtering level for user prompts is set to the strictest configuration for hate and sexual, with low severity content filtered along with content classified as medium and high severity (outlined in the red box below). In the example, the filtering levels for model completions are set at the strictest configuration for all content categories (blue box below). With this modified filtering configuration in place, low, medium, and high severity content will be filtered for the hate and sexual categories in user prompts; medium and high severity content will be filtered for the self-harm and violence categories in user prompts; and low, medium, and high severity content will be filtered for all content categories in model completions.
+
+ :::image type="content" source="../media/content-filters/settings.png" alt-text="Screenshot of the content filtering configuration with low, medium, high, highlighted." lightbox="../media/content-filters/settings.png":::
+
+5. If your use case was approved for modified content filters as outlined above, you will receive full control over content filtering configurations. With full control, you can choose to turn filtering off, or filter only at severity level high, while accepting low and medium severity content. In the image below, filtering for the categories of self-harm and violence is turned off for user prompts (red box below), while default configurations are retained for other categories for user prompts. For model completions, only high severity content is filtered for the category self-harm (blue box below), and filtering is turned off for violence (green box below), while default configurations are retained for other categories.
+
+ :::image type="content" source="../media/content-filters/off.png" alt-text="Screenshot of the content filtering configuration with self harm and violence set to off." lightbox="../media/content-filters/off.png":::
+
+ You can create multiple content filtering configurations as per your requirements.
+
+ :::image type="content" source="../media/content-filters/multiple.png" alt-text="Screenshot of the content filtering configuration with multiple content filters configured." lightbox="../media/content-filters/multiple.png":::
+
+6. Next, to make a custom content filtering configuration operational, assign a configuration to one or more deployments in your resource. To do this, go to the **Deployments** tab and select **Edit deployment** (outlined near the top of the screen in a red box below).
+
+ :::image type="content" source="../media/content-filters/edit-deployment.png" alt-text="Screenshot of the content filtering configuration with edit deployment highlighted." lightbox="../media/content-filters/edit-deployment.png":::
+
+7. Go to advanced options (outlined in the blue box below) and select the content filter configuration suitable for that deployment from the **Content Filter** dropdown (outlined near the bottom of the dialog box in the red box below).
+
+ :::image type="content" source="../media/content-filters/advanced.png" alt-text="Screenshot of edit deployment configuration with advanced options selected." lightbox="../media/content-filters/select-filter.png":::
+
+8. Select **Save and close** to apply the selected configuration to the deployment.
+
+ :::image type="content" source="../media/content-filters/select-filter.png" alt-text="Screenshot of edit deployment configuration with content filter selected." lightbox="../media/content-filters/select-filter.png":::
+
+9. You can also edit and delete a content filter configuration if required. To do this, navigate to the content filters tab and select the desired action (options outlined near the top of the screen in the red box below). You can edit/delete only one filtering configuration at a time.
+
+ :::image type="content" source="../media/content-filters/delete.png" alt-text="Screenshot of content filter configuration with edit and delete highlighted." lightbox="../media/content-filters/delete.png":::
+
+ > [!NOTE]
+ > Before deleting a content filtering configuration, you will need to unassign it from any deployment in the Deployments tab.
+
+## Best practices
+
+We recommend informing your content filtering configuration decisions through an iterative identification (for example, red team testing, stress-testing, and analysis) and measurement process to address the potential harms that are relevant for a specific model, application, and deployment scenario. After implementing mitigations such as content filtering, repeat measurement to test effectiveness. Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI) can be found in the [Responsible AI Overview for Azure OpenAI](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
+
+## Next steps
+
+- Learn more about Responsible AI practices for Azure OpenAI: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
+- Read more about [content filtering categories and severity levels](../concepts/content-filter.md) with Azure OpenAI Service.
+- Learn more about red teaming from our [Introduction to red teaming large language models (LLMs)](../concepts/red-teaming.md) article.
cognitive-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/quota.md
+
+ Title: Manage Azure OpenAI Service quota
+
+description: Learn how to use Azure OpenAI to control your deployments rate limits.
+
+ Last updated : 06/07/2023
+
+# Manage Azure OpenAI Service quota
+
+Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing your Azure OpenAI quota.
+
+## Introduction to quota
+
+Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your "quota." Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute (TPM)**. When you onboard a subscription to Azure OpenAI, you'll receive default quota for most available models. You then assign TPM to each deployment as it is created, and the available quota for that model is reduced by that amount. You can continue to create deployments and assign them TPM until you reach your quota limit. Once that happens, you can only create new deployments of that model by reducing the TPM assigned to other deployments of the same model (thus freeing up TPM), or by requesting and being approved for a model quota increase in the desired region.
+
+> [!NOTE]
+> For example, with a quota of 240,000 TPM for GPT-35-Turbo in East US, a customer can create a single deployment of 240K TPM, 2 deployments of 120K TPM each, or any number of deployments in one or multiple Azure OpenAI resources, as long as their TPM adds up to no more than 240K total in that region.
+
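The allocation rule from the note can be sketched as a simple check. This is illustrative only; the function and parameter names are made up for this example.

```python
# Sketch of the allocation rule from the note above: a new deployment may be
# created as long as the total TPM across deployments of a model in a region
# stays within the regional quota for that model.
def can_create_deployment(existing_tpm_allocations, requested_tpm, regional_quota):
    """Return True if the requested TPM fits within the remaining quota."""
    return sum(existing_tpm_allocations) + requested_tpm <= regional_quota

# A single 240K-TPM deployment, or two 120K-TPM deployments, both fit a
# 240K regional quota:
assert can_create_deployment([], 240_000, 240_000)
assert can_create_deployment([120_000], 120_000, 240_000)
```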
+When a deployment is created, the assigned TPM will directly map to the tokens-per-minute rate limit enforced on its inferencing requests. A **Requests-Per-Minute (RPM)** rate limit will also be enforced whose value is set proportionally to the TPM assignment using the following ratio:
+
+6 RPM per 1000 TPM.
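The stated ratio can be expressed directly; for example, a 120,000-TPM deployment receives a 720-RPM limit:

```python
# RPM limit derived from a TPM assignment at the stated ratio of
# 6 RPM per 1,000 TPM.
def rpm_limit(tpm):
    return tpm * 6 // 1000

assert rpm_limit(120_000) == 720
```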
+
+The flexibility to distribute TPM globally within a subscription and region has allowed Azure OpenAI Service to loosen other restrictions:
+
+- The maximum resources per region are increased to 30.
- The restriction to at most one deployment of the same model per resource has been removed.
+
+## Assign quota
+
+When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
+
+To create a new deployment from within the Azure AI Studio under **Management** select **Deployments** > **Create new deployment**.
+
+The option to set the TPM is under the **Advanced options** drop-down:
++
+Post deployment you can adjust your TPM allocation by selecting **Edit deployment** under **Management** > **Deployments** in Azure AI Studio. You can also modify this selection within the new quota management experience under **Management** > **Quotas**.
+
+> [!IMPORTANT]
+> Quotas and limits are subject to change; for the most up-to-date information, consult our [quotas and limits article](../quotas-limits.md).
+
+## Model specific settings
+
+Different model deployments, also called model classes, have unique max TPM values that you're now able to control. **This represents the maximum amount of TPM that can be allocated to that type of model deployment in a given region.** While each model type represents its own unique model class, the max TPM value currently differs only for the following model classes:
+
+- GPT-4
+- GPT-4-32K
+- GPT-35-Turbo
+- Text-Davinci-003
+
+All other model classes have a common max TPM value.
+
+> [!NOTE]
+> Quota Tokens-Per-Minute (TPM) allocation is not related to the max input token limit of a model. Model input token limits are defined in the [models table](../concepts/models.md) and are not impacted by changes made to TPM.
+
+## View and request quota
+
+For an all up view of your quota allocations across deployments in a given region, select **Management** > **Quota** in Azure AI Studio:
++
+- **Quota Name**: There's one quota value per region for each model type. The quota covers all versions of that model. The quota name can be expanded in the UI to show the deployments that are using the quota.
+- **Deployment**: Model deployments divided by model class.
+- **Usage/Limit**: For the quota name, this shows how much quota is used by deployments and the total quota approved for this subscription and region. This amount of quota used is also represented in the bar graph.
+- **Request Quota**: The icon in this field navigates to a form where requests to increase quota can be submitted.
+
+## Migrating existing deployments
+
+As part of the transition to the new quota system and TPM based allocation, all existing Azure OpenAI model deployments have been automatically migrated to use quota. In cases where the existing TPM/RPM allocation exceeds the default values due to previous custom rate-limit increases, equivalent TPM were assigned to the impacted deployments.
+
+## Understanding rate limits
+
+Assigning TPM to a deployment sets the Tokens-Per-Minute (TPM) and Requests-Per-Minute (RPM) rate limits for the deployment, as described above. TPM rate limits are based on the maximum number of tokens that are estimated to be processed by a request at the time the request is received. It isn't the same as the token count used for billing, which is computed after all processing is completed.
+
+As each request is received, Azure OpenAI computes an estimated max processed-token count that includes the following:
+
+- Prompt text and count
+- The max_tokens parameter setting
+- The best_of parameter setting
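The article lists the inputs to the estimate but not the exact formula, so the sketch below assumes the worst case of `best_of` candidate completions of `max_tokens` each; treat the formula itself as an assumption, not the service's actual computation.

```python
# Illustrative estimate only: combines the inputs listed above (prompt token
# count, max_tokens, best_of) under the assumption that best_of candidate
# completions of up to max_tokens each may be generated.
def estimated_max_processed_tokens(prompt_tokens, max_tokens, best_of=1):
    return prompt_tokens + max_tokens * best_of
```

Under this sketch, a 100-token prompt with `max_tokens=50` counts 150 tokens against the per-minute budget, and 250 with `best_of=3`.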
+
+As requests come into the deployment endpoint, the estimated max-processed-token count is added to a running token count of all requests that is reset each minute. If at any time during that minute, the TPM rate limit value is reached, then further requests will receive a 429 response code until the counter resets.
+
+RPM rate limits are based on the number of requests received over time. The rate limit expects that requests be evenly distributed over a one-minute period. If this average flow isn't maintained, then requests may receive a 429 response even though the limit isn't met when measured over the course of a minute. To implement this behavior, Azure OpenAI Service evaluates the rate of incoming requests over a small period of time, typically 1 or 10 seconds. If the number of requests received during that time exceeds what would be expected at the set RPM limit, then new requests will receive a 429 response code until the next evaluation period. For example, if Azure OpenAI is monitoring request rate on 1-second intervals, then rate limiting will occur for a 600-RPM deployment if more than 10 requests are received during each 1-second period (600 requests per minute = 10 requests per second).
+
+### Rate limit best practices
+
+To minimize issues related to rate limits, it's a good idea to use the following techniques:
+
+- Set max_tokens and best_of to the minimum values that serve the needs of your scenario. For example, don't set a large max_tokens value if you expect your responses to be small.
+- Use quota management to increase TPM on deployments with high traffic, and to reduce TPM on deployments with limited needs.
+- Implement retry logic in your application.
+- Avoid sharp changes in the workload. Increase the workload gradually.
+- Test different load increase patterns.
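The first recommendation, retry logic, can be sketched as exponential backoff with jitter on rate-limit errors. This is a sketch under assumptions: `RateLimitError` is a stand-in for however your client surfaces an HTTP 429 response.

```python
import random
import time

# Sketch: retry a throttled call with exponential backoff and jitter.
# RateLimitError is a hypothetical stand-in for an HTTP 429 from the service.
class RateLimitError(Exception):
    pass

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Invoke `call`, retrying on RateLimitError with growing delays."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # exponential backoff with up to 100% jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

The jitter spreads retries out so that many clients throttled at once don't all retry in the same evaluation window.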
+
+## Next steps
+
+- To review quota defaults for Azure OpenAI, consult the [quotas & limits article](../quotas-limits.md)
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
Previously updated : 05/15/2023 Last updated : 06/08/2023
This article contains a quick reference and a detailed description of the quotas and limits that apply to Azure OpenAI.
## Quotas and limits reference
-The following sections provide you with a quick guide to the quotas and limits that apply to the Azure OpenAI:
+The following sections provide you with a quick guide to the default quotas and limits that apply to Azure OpenAI:
| Limit Name | Limit Value |
|--|--|
-| OpenAI resources per region per Azure subscription | 3 |
-| Request limits per model* | Davinci-models (002 and later): 120 per minute <br> ChatGPT model (preview): 300 per minute <br> GPT-4 models (preview): 18 per minute <br> DALL-E models (preview): 2 concurrent requests <br> All other models: 300 per minute |
-| Token limits per model* | Davinci-models (002 and later): 40,000 per minute <br> ChatGPT model: 120,000 per minute<br> GPT-4 8k model: 10,000 per minute<br> GPT-4 32k model: 32,000 per minute<br> All other models: 120,000 per minute|
-| Max fine-tuned model deployments* | 2 |
-| Ability to deploy same model to multiple deployments | Not allowed |
+| OpenAI resources per region per Azure subscription | 30 |
+| Default quota per model and region (in tokens-per-minute)<sup>1</sup> |Text-Davinci-003: 120 K <br> GPT-4: 20 K <br> GPT-4-32K: 60 K <br> All others: 240 K |
+| Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)|
+| Max fine-tuned model deployments | 2 |
| Total number of training jobs per resource | 100 |
| Max simultaneous running training jobs per resource | 1 |
| Max training jobs queued | 20 |
-| Max Files per resource | 50 |
+| Max Files per resource | 30 |
| Total size of all files per resource | 1 GB |
| Max training job time (job will fail if exceeded) | 720 hours |
| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
-*The limits are subject to change. We anticipate that you will need higher limits as you move toward production and your solution scales. When you know your solution requirements, please reach out to us by applying for a quota increase here: <https://aka.ms/oai/quotaincrease>
+<sup>1</sup> Default quota limits are subject to change.
+### General best practices to remain within rate limits
-For information on max tokens for different models, consult the [models article](./concepts/models.md#model-summary-table-and-region-availability)
-
-### General best practices to mitigate throttling during autoscaling
-
-To minimize issues related to throttling, it's a good idea to use the following techniques:
+To minimize issues related to rate limits, it's a good idea to use the following techniques:
- Implement retry logic in your application.
- Avoid sharp changes in the workload. Increase the workload gradually.
- Test different load increase patterns.
-- Create another OpenAI service resource in the same or different regions, and distribute the workload among them.
-
-The next sections describe specific cases of adjusting quotas.
+- Increase the quota assigned to your deployment. Move quota from another deployment, if necessary.
### How to request increases to the default quotas and limits
-At this time, due to overwhelming demand we cannot accept any new resource or quota increase requests.
-
- 
+Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure AI Studio. Please note that due to overwhelming demand, we are not currently approving new quota increase requests. Your request will be queued until it can be filled at a later time.
-> [!NOTE]
-> Ensure that you thoroughly assess your current resource utilization, approaching its full capacity. Be aware that we will not grant additional resources if efficient usage of existing resources is not observed.
+For other rate limits, please [submit a service request](/azure/cognitive-services/cognitive-services-support-options?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
## Next steps
+Explore how to [manage quota](./how-to/quota.md) for your Azure OpenAI deployments.
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
This issue is fixed in Azure Communication Services Calling SDK version 1.3.1-be
* iOS Safari version: 15.1
-### MacOS Ventura Safari(v16.3 and below) screen sharing.
-Screen sharing does not work in MacOS Ventura Safari(v16.3 and below). Known issue from Safari and will be fixed in v16.4+
+### Screen sharing in macOS Ventura Safari (v16.3 and below)
+Screen sharing does not work in macOS Ventura Safari (v16.3 and below). This is a known Safari issue that will be fixed in v16.4+.
### Refreshing a page doesn't immediately remove the user from their call
The environment in which this problem occurs is the following:
The cause of this problem might be that acquiring your own stream from the same device will have a side effect of running into race conditions. Acquiring streams from other devices might lead the user into insufficient USB/IO bandwidth, and the `sourceUnavailableError` rate will skyrocket.
+### Excessive use of certain APIs like mute/unmute will result in throttling on ACS infrastructure
+
+As a result of a mute/unmute API call, ACS infrastructure informs the other participants in the call about the audio state of the local participant who invoked mute/unmute, so that participants in the call know who is muted or unmuted.
+Excessive use of mute/unmute is blocked by ACS infrastructure. Throttling occurs if the participant (or an application acting on behalf of the participant) attempts to mute/unmute continuously, more than 15 times in a 30-second rolling window.
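A client-side guard mirroring that throttling rule can be sketched as a rolling-window limiter. This is illustrative only and not part of the ACS SDK.

```python
from collections import deque
import time

# Client-side guard mirroring the throttling rule above: reject a mute/unmute
# attempt once 15 calls have been made within a 30-second rolling window.
# Illustrative sketch only; not part of the ACS SDK.
class RollingWindowLimiter:
    def __init__(self, max_calls=15, window_seconds=30.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.window = window_seconds
        self.clock = clock  # injectable for testing
        self.calls = deque()

    def allow(self):
        """Return True if a call may proceed now; record it if so."""
        now = self.clock()
        # drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

An application would check `allow()` before invoking mute/unmute and drop or coalesce attempts that exceed the budget, rather than letting the infrastructure block them.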
+
+
## Communication Services Call Automation APIs

The following are known issues in the Communication Services Call Automation APIs:
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/plan-solution.md
If your phone number is being used by a person (for example, a user of your call
The table below summarizes these phone number types:
-| Phone number type | Example | Country availability | Phone Number Capability |Common use case |
+| Phone number type | Example | Country/Region availability | Phone Number Capability |Common use case |
| -- | -- | -- | -- | -- |
| Local (Geographic) | +1 (local area code) XXX XX XX | US* | Calling (Outbound) | Assigning phone numbers to users in your applications |
| Toll-Free | +1 (toll-free area *code*) XXX XX XX | US* | Calling (Outbound), SMS (Inbound/Outbound)| Assigning phone numbers to Interactive Voice Response (IVR) systems/Bots, SMS applications |
The table below summarizes these phone number types:
For most phone numbers, we allow you to configure an "a la carte" set of capabilities. These capabilities can be selected as you lease your telephone numbers within Azure Communication Services.
-The capabilities that are available to you depend on the country that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements. Azure Communication Services offers the following phone number capabilities:
+The capabilities that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements. Azure Communication Services offers the following phone number capabilities:
- **One-way outbound SMS** This option allows you to send SMS messages to your users. This can be useful in notification and two-factor authentication scenarios.
- **Two-way inbound and outbound SMS** This option allows you to send and receive messages from your users using phone numbers. This can be useful in customer service scenarios.
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
With this option:
This option requires an uninterrupted connection to Azure Communication Services.
-For cloud calling, outbound calls are billed at per-minute rates depending on the target country. See the [current rate list for PSTN calls](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv).
+For cloud calling, outbound calls are billed at per-minute rates depending on the target country/region. See the [current rate list for PSTN calls](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv).
### Azure direct routing
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps
# Quickstart: Send an SMS message

> [!IMPORTANT]
-> SMS capabilities depend on the phone number you use and the country that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation.
+> SMS capabilities depend on the phone number you use and the country/region that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation.
<br/>
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Do the following steps in the tenant that contains your Project Synergy applicat
# Assign the relevant Role to the managed identity for the Azure Communications Gateway resource
New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
}
+
```

## 5. Provide additional information to your onboarding team
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Azure Container Instances enables [deployment of container instances into an Azu
Confidential containers on ACI enable you to run containers in a trusted execution environment (TEE) that provides hardware-based confidentiality and integrity protections for your container workloads. Confidential containers on ACI can protect data-in-use and encrypts data being processed in memory. Confidential containers on ACI are supported as a SKU that you can select when deploying your workload. For more information, see [confidential container groups](./container-instances-confidential-overview.md).
+## Spot container deployment
+
+ACI Spot containers allow customers to run interruptible, containerized workloads on unused Azure capacity at discounts of up to 70% compared to regular-priority ACI containers. ACI Spot containers may be preempted when Azure encounters a shortage of surplus capacity, and they're suitable for workloads without strict availability requirements. Customers are billed for per-second memory and core usage. To use ACI Spot containers, deploy your workload with the `priority` property set to `Spot` to take advantage of the discounted pricing model.
+For more information, see [spot container groups](container-instances-spot-containers-overview.md).
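
Opting in is a single extra flag on an otherwise ordinary deployment. As a minimal sketch with the Azure CLI (the resource group and container names are placeholders, and the command requires an Azure subscription):

```azurecli-interactive
az container create \
  --resource-group myResourceGroup \
  --name myspotcontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --priority spot
```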
+## Considerations
+
+There are default limits that require quota increases. Not all quota increases may be approved: [Resource availability & quota limits for ACI - Azure Container Instances | Microsoft Learn](./container-instances-resource-and-quota-limits.md)
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
> [!NOTE]
> Some regions don't support availability zones (denoted by an 'N/A' in the table), and some regions have availability zones, but ACI doesn't currently leverage the capability (denoted by an 'N' in the table). For more information, see [Azure regions with availability zones][az-region-support].
-| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU (preview) |
+| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU (preview) | Spot containers (preview) |
| --- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
-| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| Australia Southeast | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Brazil South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Central India | 4 | 16 | 4 | 16 | 50 | V100 | N | N |
-| Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y |
-| East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y| N |
-| Germany West Central | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| Japan West | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Jio India West | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Korea Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| North Central US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | N | N |
-| North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | Y |
-| Norway East | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Norway West | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| South Africa North | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| South Central US | 4 | 16 | 4 | 16 | 50 | V100 | Y | N |
-| South India | 4 | 16 | 4 | 16 | 50 | K80 | N | N |
-| Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | Y | N |
-| Sweden Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Sweden South | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Switzerland North | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N | N |
-| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| UK West | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y |
-| West India | 4 | 16 | N/A | N/A | 50 | N/A | N | N |
-| West US | 4 | 16 | 4 | 16 | 50 | N/A | N | Y |
-| West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | N |
-| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
+| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| Australia Southeast | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Brazil South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Central India | 4 | 16 | 4 | 16 | 50 | V100 | N | N | N |
+| Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y | N |
+| East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | Y |
+| France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| Germany West Central | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| Japan West | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Jio India West | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Korea Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| North Central US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | N | N | N |
+| North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | Y | N |
+| Norway East | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Norway West | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| South Africa North | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| South Central US | 4 | 16 | 4 | 16 | 50 | V100 | Y | N | N |
+| South India | 4 | 16 | 4 | 16 | 50 | K80 | N | N | N |
+| Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | Y | N | N |
+| Sweden Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Sweden South | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Switzerland North | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N | N | N |
+| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
+| UK West | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
+| West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y | Y |
+| West India | 4 | 16 | N/A | N/A | 50 | N/A | N | N | N |
+| West US | 4 | 16 | 4 | 16 | 50 | N/A | N | Y | Y |
+| West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | N | N |
+| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
container-instances Container Instances Spot Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-spot-containers-overview.md
+
+ Title: Azure Container Instances Spot containers
+description: Learn more about Spot container groups
+++++ Last updated : 05/14/2023++
+# Azure Container Instances Spot containers (preview)
+This article introduces Azure Container Instances (ACI) Spot containers, which allow you to run interruptible workloads in containerized form on unused Azure capacity. By utilizing Spot containers, you can get up to a 70% discount compared to regular-priority ACI containers.
+
+Spot containers offer the best of both worlds by combining the simplicity of ACI with the cost-effectiveness of Spot VMs. This enables customers to easily and affordably scale their containerized interruptible workloads. It's important to note that Spot containers may be preempted at any time, particularly when Azure has limited surplus capacity. Customers are billed based on per-second memory and core usage.
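+
Since billing is per-second on cores and memory, the arithmetic behind the discount is easy to sketch. The rates below are hypothetical placeholders for illustration only (real prices vary by region; check the ACI pricing page):

```python
# Hypothetical per-second rates -- illustration only, not real ACI prices.
REGULAR_CORE_RATE = 0.0000125    # $ per vCPU-second (assumed)
REGULAR_MEMORY_RATE = 0.0000014  # $ per GB-second (assumed)
SPOT_DISCOUNT = 0.70             # "up to 70%" off regular-priority pricing

def aci_cost(cores: float, memory_gb: float, seconds: int, spot: bool = False) -> float:
    """Estimate the cost of a container group billed per second on cores and memory."""
    rate = cores * REGULAR_CORE_RATE + memory_gb * REGULAR_MEMORY_RATE
    if spot:
        rate *= 1 - SPOT_DISCOUNT
    return rate * seconds

# One hour of a 4-core, 16 GB container group, regular vs. Spot:
regular = aci_cost(4, 16, 3600)
spot = aci_cost(4, 16, 3600, spot=True)
print(f"regular: ${regular:.4f}  spot: ${spot:.4f}  savings: {1 - spot / regular:.0%}")
```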
+
+This feature is designed for customers who need to run interruptible workloads with no strict availability requirements. Azure Container Instances Spot Containers support both Linux and Windows containers, providing flexibility for different operating system environments.
+
+This article provides background about the feature, limitations, and resources. To see the availability of Spot containers in Azure regions, see [Resource and region availability](container-instances-region-availability.md).
+
+> [!NOTE]
+> Spot containers with Azure Container Instances is in preview and is not recommended for production scenarios.
+++
+## Azure Container Instances Spot containers overview
+
+### Lift and shift applications
+
+ACI Spot containers are a cost-effective option for running containerized applications or parallelizable offline workloads such as image rendering, genomic processing, and Monte Carlo simulations. Customers can lift and shift their containerized Linux or Windows applications without adopting specialized programming models, getting the benefits of standard ACI containers at low cost.
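+
Because Spot container groups can be preempted at any time and are restarted automatically, workloads like these benefit from checkpointing so that a restart resumes work rather than recomputing it. A hypothetical, minimal sketch of a preemption-tolerant Monte Carlo job follows; the checkpoint file name is an assumption, and in a real deployment the state should be written to storage that survives a restart:

```python
import json
import os
import random

CHECKPOINT = "checkpoint.json"  # hypothetical path; persist somewhere durable in practice

def load_state():
    """Resume from the last checkpoint if a previous run was preempted."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"inside": 0, "total": 0}

def run(batches: int, batch_size: int = 10_000) -> float:
    """Monte Carlo estimate of pi, checkpointed after every batch."""
    state = load_state()
    for _ in range(batches):
        for _ in range(batch_size):
            x, y = random.random(), random.random()
            state["inside"] += x * x + y * y <= 1.0
            state["total"] += 1
        # An eviction between batches loses at most one batch of work.
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)
    return 4 * state["inside"] / state["total"]

if __name__ == "__main__":
    print("pi ~", run(batches=10))
```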
+
+## Eviction policies
+
+For Spot containers, customers can't choose eviction types or policies like Spot VMs. If an eviction occurs, the container groups hosting the customer workloads are automatically restarted without requiring any action from the customer.
+
+## Unsupported features
+
+The ACI Spot containers preview release has the following limitations:
+
+* **Public IP Endpoint**: ACI Spot container groups won't be assigned a public IP endpoint. This means that the container groups can't be accessed directly from the internet.
+* **Deployment Behind Virtual Network**: Spot container groups can't be deployed behind a virtual network.
+* **Confidential SKU Support**: ACI Spot containers don't support the Confidential SKU, which means that you can't use the Confidential Computing capabilities provided by Azure.
+* **Availability Zone Pinning**: ACI Spot containers don't support the ability to pin Availability Zones per container group deployment.
+
+## Next steps
+
+* For a deployment example with the Azure portal, see [Deploy a Spot container with Azure Container Instances using the Azure portal](container-instances-tutorial-deploy-spot-containers-portal.md)
+* For a deployment example with the Azure CLI, see [Deploy a Spot container with Azure Container Instances using the Azure CLI](container-instances-tutorial-deploy-spot-containers-cli.md)
container-instances Container Instances Tutorial Deploy Spot Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-spot-containers-cli.md
+
+ Title: Tutorial - Deploy a Spot container group on Azure Container Instances
+description: In this quickstart, you use the Azure CLI to quickly deploy a Spot container on Azure Container Instances
+++++ Last updated : 05/11/2023++
+# Tutorial: Deploy a Spot container with Azure Container Instances using the Azure CLI (Preview)
+
+Spot containers combine the simplicity of ACI with the low cost of Spot VMs, making it easy and affordable for customers to run containerized interruptible workloads at scale. Use Azure Container Instances to run serverless Spot containers. Deploy an application to a Spot container on demand when you want to run interruptible, containerized workloads on unused Azure capacity at low cost and you don't need a full container orchestration platform like Azure Kubernetes Service.
+
+In this tutorial, you use the Azure CLI to deploy a helloworld container using Spot containers. A few seconds after you execute a single deployment command, you can view the container logs:
+
+- This tutorial requires version 2xxx or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
+
+Azure container instances, like all Azure resources, must be deployed into a resource group. Resource groups allow you to organize and manage related Azure resources.
+
+First, create a resource group named *myResourceGroup* in the *westus* location with the following [az group create][az-group-create] command:
+
+```azurecli-interactive
+az group create --name myResourceGroup --location westus
+```
+
+## Create a container
+
+Now that you have a resource group, you can run a Spot container in Azure. To create a Spot container group with the Azure CLI, provide a resource group name, container instance name, container image, and the new `priority` property with a value of `Spot` to the [az container create][az-container-create] command. In this quickstart, you use the public `mcr.microsoft.com/azuredocs/aci-helloworld` image. This image packages a small web app written in Node.js that serves a static HTML page.
+
+You can't expose your Spot containers to the internet by opening ports or assigning a DNS name label. In this quickstart, you deploy a container using the helloworld image without a DNS name label, so it won't be publicly reachable. You can query the container logs to verify that the container is listening on the default port 80.
+
+Execute a command similar to the following to start a container instance.
+
+```azurecli-interactive
+az container create --resource-group acispotdemo --name acispotclitest --image mcr.microsoft.com/azuredocs/aci-helloworld --priority spot
+```
+
+Within a few seconds, you should get a response from the Azure CLI indicating that the deployment has completed. Check its status with the [az container show][az-container-show] command:
+
+```azurecli-interactive
+az container show --resource-group acispotdemo --name acispotclitest --query "{ContainerGroupName:name, ProvisioningState:provisioningState}" --out table
+```
+
+When you run the command, the container group's name and its provisioning state are displayed.
+
+```output
+ContainerGroupName ProvisioningState
+--------------------  -------------------
+acispotclitest Succeeded
+```
+
+If the container's `ProvisioningState` is **Succeeded**, congratulations! You've successfully deployed an application running in a Docker container to Azure.
+
+## Pull the container logs
+
+When you need to troubleshoot a container or the application it runs (or just see its output), start by viewing the container instance's logs.
+
+Pull the container instance logs with the [az container logs][az-container-logs] command:
+
+```azurecli-interactive
+az container logs --resource-group acispotdemo --name acispotclitest
+```
+
+The output displays the logs for the container, and should show the following output:
+
+```output
+listening on port 80
+```
+
+## Attach output streams
+
+In addition to viewing the logs, you can attach your local standard output and standard error streams to those of the container.
+
+First, execute the [az container attach][az-container-attach] command to attach your local console to the container's output streams:
+
+```azurecli-interactive
+az container attach --resource-group acispotdemo --name acispotclitest
+```
+
+Once attached, you can view the container's output streams. When you're done, detach your console with `Control+C`. You should see output similar to the following:
+
+```output
+Container 'acispotclitest' is in state 'Running'...
+Start streaming logs:
+listening on port 80
+```
+
+## Clean up resources
+
+When you're done with the container, remove it using the [az container delete][az-container-delete] command:
+
+```azurecli-interactive
+az container delete --resource-group acispotdemo --name acispotclitest
+```
+
+To verify that the container has been deleted, execute the [az container list][az-container-list] command:
+
+```azurecli-interactive
+az container list --resource-group acispotdemo --output table
+```
+
+The **acispotclitest** container shouldn't appear in the command's output. If you have no other containers in the resource group, no output is displayed.
+
+If you're done with the *acispotdemo* resource group and all the resources it contains, delete it with the [az group delete][az-group-delete] command:
+
+```azurecli-interactive
+az group delete --name acispotdemo
+```
+
+## Next steps
+
+In this tutorial, you created a Spot container on Azure Container Instances with a default quota and eviction policy using the Azure CLI.
+
+* [Check out the overview for ACI Spot containers](container-instances-spot-containers-overview.md)
+* [Try out Spot containers with Azure Container Instances using the Azure portal](container-instances-tutorial-deploy-spot-containers-portal.md)
+
+<!-- LINKS - External -->
+[app-github-repo]: https://github.com/Azure-Samples/aci-helloworld.git
+[azure-account]: https://azure.microsoft.com/free/
+[node-js]: https://nodejs.org
+
+<!-- LINKS - Internal -->
+[az-container-attach]: /cli/azure/container#az_container_attach
+[az-container-create]: /cli/azure/container#az_container_create
+[az-container-delete]: /cli/azure/container#az_container_delete
+[az-container-list]: /cli/azure/container#az_container_list
+[az-container-logs]: /cli/azure/container#az_container_logs
+[az-container-show]: /cli/azure/container#az_container_show
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[azure-cli-install]: /cli/azure/install-azure-cli
+[container-service]: ../aks/intro-kubernetes.md
container-instances Container Instances Tutorial Deploy Spot Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-spot-containers-portal.md
+
+ Title: Tutorial - Deploy a Spot container to Azure Container Instances via Azure portal
+description: In this tutorial, you deploy a Spot container to Azure Container Instances via the Azure portal.
+++++ Last updated : 05/11/2023+++
+# Tutorial: Deploy a Spot container with Azure Container Instances using the Azure portal (Preview)
+
+In this tutorial, you use the Azure portal to deploy a Spot container to Azure Container Instances with a default quota. After deploying the container, you can verify that the application is running by viewing its logs.
++
+## Sign in to Azure
+
+Sign in to the Azure portal at https://portal.azure.com.
+
+If you don't have an Azure subscription, create a [free account][azure-free-account] before you begin.
+
+## Create a Spot container on Azure Container Instances
+
+1. On the Azure portal homepage, select **Create a resource**.
+
+ ![Screenshot showing how to begin creating a new container instance in the Azure portal, PNG.](media/container-instances-quickstart-portal/quickstart-portal-create-resource.png)
+
+1. Select **Containers** > **Container Instances**.
+
+1. On the **Basics** page, choose a subscription and enter the following values for **Resource group**, **Container name**, **Image source**, and **Container image**. Then, to deploy an ACI Spot container, opt in to the Spot discount by selecting the **Run with Spot discount** field. This setting automatically enforces the preview limitations and lets you deploy only in supported regions.
+
+ * Resource group: **Create new** > `acispotdemo`
+ * Container name: `acispotportaldemo`
+    * Region: One of `West Europe`, `East US 2`, or `West US`
+ * SKU: `Standard`
+ * Image source: **Quickstart images**
+ * Container image: `mcr.microsoft.com/azuredocs/aci-helloworld:v1` (Linux)
+
+ ![Screenshot of the priority selection of a container group, PNG.](media/container-instances-spot-containers-tutorials/spot-create-portal-ui-basic.png)
+
+    When deploying a Spot container on Azure Container Instances, you must select a region supported in the public preview. You can change the restart policy, region, type of container image, and compute resources. If you need more than the default quota, file a support request.
+
+1. Leave all other settings as their defaults, then select **Review + create**.
+
+1. When the validation completes, you're shown a summary of the container's settings. Select **Create** to submit your container deployment request. When the deployment starts, a notification appears that indicates the deployment is in progress. Another notification is displayed when the container group has been deployed.
+
+1. Open the overview for the container group by navigating to **Resource Groups** > **acispotdemo** > **acispotportaldemo**. Make a note of the **priority** property of the container instance and its **Status**.
+
+1. On the **Overview** page, note the **Status** of the instance.
+
+1. Once its status is *Running*, switch to the Azure CLI and pull the container logs to verify that the container is listening on the default port 80.
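
   The check can be done by pulling the container logs with a command like the following (the resource group and container names are the ones used in this tutorial):

   ```azurecli-interactive
   az container logs --resource-group acispotdemo --name acispotportaldemo
   ```

   If the app started successfully, the logs include `listening on port 80`.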
+
+ ![Screenshot of output from container logs post successful deployment to show helloworld container application running, PNG.](media/container-instances-spot-containers-tutorials/aci-spot-portal-demo-show-container-logs.png)
+
+Congratulations! You've deployed a Spot container on Azure Container Instances that runs the sample hello world container application.
+
+## Clean up resources
+
+When you're done with the container, select **Overview** for the *acispotportaldemo* container instance, then select **Delete**.
+
+## Next steps
+
+In this tutorial, you created a Spot container on Azure Container Instances with a default quota and eviction policy using the Azure portal.
+
+* [ACI Spot containers overview](container-instances-spot-containers-overview.md)
+* [Try out Spot containers with Azure Container Instances using the Azure CLI](container-instances-tutorial-deploy-spot-containers-cli.md)
+
+<!-- LINKS - External -->
+[azure-free-account]: https://azure.microsoft.com/free/
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md
Previously updated : 06/01/2023 Last updated : 06/09/2023 # Materialized views for Azure Cosmos DB for NoSQL (preview)
Use the Azure CLI to enable the materialized views feature either with a native
# Variable for account name accountName="<account-name>"
+
+ # Variable for Subscription
+ subscriptionId="<subscription-id>"
``` 1. Create a new JSON file named **capabilities.json** with the capabilities manifest.
Use the Azure CLI to enable the materialized views feature either with a native
1. Get the identifier of the account and store it in a shell variable named `$accountId`. ```azurecli
- accountId=$(\
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName \
- --query id \
- --output tsv \
- )
+ accountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDB/databaseAccounts/$accountName"
``` 1. Enable the preview materialized views feature for the account using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb.
Use the Azure CLI to enable the materialized views feature either with a native
```azurecli az rest \ --method PATCH \
- --uri "https://management.azure.com$accountId?api-version=2022-11-15-preview" \
+ --uri "https://management.azure.com$accountId?api-version=2022-11-15-preview" \
--body @capabilities.json ```
Create a materialized view builder to automatically transform data and write to
```azurecli az rest \ --method PUT \
- --uri "https://management.azure.com$accountIdservices/materializedViewsBuilder?api-version=2022-11-15-preview" \
+ --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview" \
--body @builder.json ```
Create a materialized view builder to automatically transform data and write to
```azurecli az rest \ --method GET \
- --uri "https://management.azure.com$accountIdservices/materializedViewsBuilder?api-version=2022-11-15-preview"
+ --uri "https://management.azure.com$accountId/services/materializedViewsBuilder?api-version=2022-11-15-preview"
```
Once your account and Materialized View Builder is set up, you should be able to
}, "materializedViewDefinition": { "sourceCollectionId": "mv-src",
- "definition": "SELECT s.accountId, s.emailAddress, CONCAT(s.name.first, s.name.last) FROM s"
+ "definition": "SELECT s.accountId, s.emailAddress FROM s"
} }, "options": {
Once your account and Materialized View Builder is set up, you should be able to
1. Now, make a REST API call to create the materialized view as defined in the **mv_definition.json** file. Use the Azure CLI to make the REST API call.
- 1. Create a variable for the name of the materialized view.
+ 1. Create a variable for the name of the materialized view and source database name.
```azurecli materializedViewName="mv-target"
+
+ # Variable for database name used in later section
+ databaseName="<database-that-contains-source-collection>"
``` 1. Make a REST API call to create the materialized view.
Once your account and Materialized View Builder is set up, you should be able to
```azurecli az rest \ --method PUT \
- --uri "https://management.azure.com$accountIdsqlDatabases/";\
+ --uri "https://management.azure.com$accountId/sqlDatabases/";\
"$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ --body @definition.json \ --headers content-type=application/json
Once your account and Materialized View Builder is set up, you should be able to
```azurecli az rest \ --method GET \
- --uri "https://management.azure.com$accountIdsqlDatabases/";\
+ --uri "https://management.azure.com$accountId/sqlDatabases/";\
"$databaseName/containers/$materializedViewName?api-version=2022-11-15-preview" \ --headers content-type=application/json \ --query "{mvCreateStatus: properties.Status}"
There are a few limitations with the Cosmos DB NoSQL API Materialized View Featu
- point-in-time restore, hierarchical partitioning, end-to-end encryption isn't supported on source containers, which have materialized views associated with them. - Role-based access control is currently not supported for materialized views. - Cross-tenant customer-managed-key (CMK) encryption isn't supported on materialized views.
+- This feature can't be enabled along with the partition merge feature or the analytical store.
In addition to the above limitations, consider the following extra limitations:
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-hbase-to-cosmos-db.md
Data security is a shared responsibility of the customer and the database provid
| Protect and isolate sensitive data | For example, if you are using Apache Ranger, you can use Ranger policy to apply the policy to the table. | You can separate personal and other sensitive data into specific containers and read / write, or limit read-only access to specific users. | | Monitoring for attacks | It needs to be implemented using third party products. | By using [audit logging and activity logs](../monitor.md), you can monitor your account for normal and abnormal activity. | | Responding to attacks | It needs to be implemented using third party products. | When you contact Azure support and report a potential attack, a five-step incident response process begins. |
-| Ability to geo-fence data to adhere to data governance restrictions | You need to check the restrictions of each country and implement it yourself. | Guarantees data governance for sovereign regions (Germany, China, US Gov, etc.). |
+| Ability to geo-fence data to adhere to data governance restrictions | You need to check the restrictions of each country/region and implement it yourself. | Guarantees data governance for sovereign regions (Germany, China, US Gov, etc.). |
| Physical protection of servers in protected data centers | It depends on the data center where the system is located. | For a list of the latest certifications, see the global [Azure compliance site](/compliance/regulatory/offering-home?view=o365-worldwide&preserve-view=true). | | Certifications | Depends on the Hadoop distribution. | See [Azure compliance documentation](../compliance.md) |
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 06/06/2023 Last updated : 06/08/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### June 2023
+* General availability: Customer defined database name is now available in [all regions](./resources-regions.md) at [cluster provisioning](./quickstart-create-portal.md) time.
+ * If the database name is not specified, the default `citus` name is used.
* General availability: [Managed PgBouncer settings](./reference-parameters.md#managed-pgbouncer-parameters) are now configurable on all clusters. * Learn more about [connection pooling](./concepts-connection-pool.md). * General availability: Preferred availability zone (AZ) selection is now enabled in [all Azure Cosmos DB for PostgreSQL regions](./resources-regions.md) that support AZs.
might have constrained capabilities. For more information, see
[Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
-* Data encryption at rest using customer managed keys.
+* [Data encryption at rest using customer managed keys](./concepts-customer-managed-keys.md).
## Contact us
-Let us know about your experience using preview features, by emailing [Ask
+Let us know about your experience using preview features or if you have other product feedback, by emailing [Ask
Azure Cosmos DB for PostgreSQL](mailto:AskCosmosDB4Postgres@microsoft.com). (This email address isn't a technical support channel. For technical problems, open a [support
cosmos-db Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-connect-psql.md
Previously updated : 06/05/2023 Last updated : 06/07/2023 # Connect to a cluster with psql - Azure Cosmos DB for PostgreSQL
Your cluster has a default database named `citus`. To connect to the database, y
:::image type="content" source="media/quickstart-connect-psql/get-connection-string.png" alt-text="Screenshot that shows copying the psql connection string.":::
- The **psql** string is of the form `psql "host=c-<cluster>.<uniqueID>.postgres.cosmos.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`. Notice that the host name starts with a `c.`, for example `c-mycluster.12345678901234.postgres.cosmos.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` and `username` are `citus` and can't be changed.
+ The **psql** string is of the form `psql "host=c-<cluster>.<uniqueID>.postgres.cosmos.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`. Notice that the host name starts with a `c.`, for example `c-mycluster.12345678901234.postgres.cosmos.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` is `citus` and can be changed only at cluster provisioning time. The `user` can be any valid [Postgres role](./howto-create-users.md) on your cluster.
1. Open Azure Cloud Shell by selecting the **Cloud Shell** icon on the top menu bar.
Your cluster has a default database named `citus`. To connect to the database, y
:::image type="content" source="media/quickstart-connect-psql/cloud-shell-run-psql.png" alt-text="Screenshot that shows running psql in the Cloud Shell.":::
- When psql successfully connects to the database, you see a new `citus=>` prompt:
+ When psql successfully connects to the database, you see a new `citus=>` (or the custom name of your database) prompt:
```bash psql (14.2, server 14.5)
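The connection-string format described in this update can be sketched in code. The cluster name, unique ID, and password below are placeholder values, and the helper function is a hypothetical illustration, not part of the Azure docs or SDK:

```python
# Sketch: assembling the libpq-style connection string described above.
# All argument values here are placeholders, not real credentials.
def build_connection_string(cluster: str, unique_id: str, password: str,
                            dbname: str = "citus", user: str = "citus") -> str:
    """Build a coordinator connection string for an Azure Cosmos DB for
    PostgreSQL cluster. The host carries the `c-` prefix, which addresses
    the coordinator node of the cluster."""
    host = f"c-{cluster}.{unique_id}.postgres.cosmos.azure.com"
    return (f"host={host} port=5432 dbname={dbname} "
            f"user={user} password={password} sslmode=require")

print(build_connection_string("mycluster", "12345678901234", "secret"))
```

The `dbname` default mirrors the service default (`citus`); per the update above, it can differ only if a custom name was chosen at provisioning time.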
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
Title: Limits and limitations ΓÇô Azure Cosmos DB for PostgreSQL description: Current limits for clusters--++ Previously updated : 01/25/2023 Last updated : 06/07/2023 # Azure Cosmos DB for PostgreSQL limits and limitations
The connection limits above are for *user* connections (`max_connections` minus
administration and recovery. The limits apply to both worker nodes and the coordinator node. Attempts to
-connect beyond these limits will fail with an error.
+connect beyond these limits fail with an error.
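The user-connection budget described above (`max_connections` minus the slots reserved for administration and recovery) can be sketched as simple arithmetic. The numbers in the example are illustrative, not actual cluster settings:

```python
# Sketch: the user-connection limit described above. Actual values come
# from the server settings (max_connections, superuser_reserved_connections).
def user_connection_limit(max_connections: int,
                          reserved_connections: int) -> int:
    """User connections are total connections minus the slots reserved
    for administration and recovery."""
    return max_connections - reserved_connections

# Illustrative example: 300 total connections with 3 reserved slots.
print(user_connection_limit(300, 3))  # → 297
```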
#### Connection pooling
currently **not supported**:
### Database creation The Azure portal provides credentials to connect to exactly one database per
-cluster, the `citus` database. Creating another
-database is currently not allowed, and the CREATE DATABASE command will fail
+cluster. Creating another database is currently not allowed, and the CREATE DATABASE command fails
with an error.
+By default, this database is called `citus`. Azure Cosmos DB for PostgreSQL supports custom database names at cluster provisioning time only.
+ ## Next steps * Learn how to [create a cluster in the
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
With time, you'll eventually grow in traffic and your resource consumption (meas
:::image type="content" source="./media/social-media-apps/social-media-apps-scaling.png" alt-text="Scaling up and defining a partition key":::
-What happens if things keep getting better? Suppose users from another region, country, or continent notice your platform and start using it. What a great surprise!
+What happens if things keep getting better? Suppose users from another country/region or continent notice your platform and start using it. What a great surprise!
But wait! You soon realize their experience with your platform isn't optimal. They're so far away from your operational region that the latency is terrible. You obviously don't want them to quit. If only there were an easy way of **extending your global reach**. There is!
cost-management-billing Switch Azure Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md
You might not see the **Switch Offer** option if:
* To switch offer from a different subscription, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). * You're still in your first billing period; you must wait for your first billing period to end before you can switch offers.
-### Why do I see "There are no offers available in your region or country at this time"?
+### Why do I see "There are no offers available in your country/region at this time"?
* You might not be eligible for any offer switches. Check the [list of available offers you can switch to](#whats-supported) and make sure that you've activated the right benefits with Visual Studio or Bizspark. * Some offers may not be available in all countries/regions.
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptio
#### Credit card form doesn't support my billing address
-Your billing address must be in the country that you select in the **About you** section. Verify that you have selected the correct country.
+Your billing address must be in the country/region that you select in the **About you** section. Verify that you have selected the correct country/region.
#### Progress bar hangs in identity verification by card section
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Title: Release notes for Microsoft Azure Data Manager for Agriculture Preview #Required; page title is displayed in search results. Include the brand.
-description: This article provides release notes for Azure Data Manager for Agriculture Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results.
---- Previously updated : 04/14/2023 #Required; mm/dd/yyyy format.-
+ Title: Release notes for Microsoft Azure Data Manager for Agriculture Preview
+description: This article provides release notes for Azure Data Manager for Agriculture Preview releases, improvements, bug fixes, and known issues.
++++ Last updated : 06/09/2023 + # Release Notes for Azure Data Manager for Agriculture Preview
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
- Deprecated functionality - Plans for changes
- We'll provide information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Agriculture Preview monthly.
+ We provide information on the latest releases, bug fixes, and deprecated functionality for Azure Data Manager for Agriculture Preview monthly.
> [!NOTE] > Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
### Key Announcement: Preview Release Azure Data Manager for Agriculture is now available in preview. See our blog post [here](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
+## April 2023
+ ### Audit logs In Azure Data Manager for Agriculture Preview, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure for data-plane requests. [Audit Logs](how-to-set-up-audit-logs.md) are now available for your use.
You can connect to Azure Data Manager for Agriculture service from your virtual
### BYOL for satellite imagery To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
+## May 2023
+
+### Understanding throttling
+Azure Data Manager for Agriculture implements API throttling to ensure consistent performance by limiting the number of requests within a specified time frame. Throttling prevents resource overuse and maintains optimal performance and reliability for all customers. Details are available [here](concepts-understanding-throttling.md).
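As a hedged sketch of how a client might cope with the throttling described above (the 429 status handling and the delay schedule are generic assumptions, not the service's documented contract), an exponential-backoff retry could look like this:

```python
import time

# Sketch: client-side exponential backoff for a throttled API.
# `call` stands in for any Data Manager for Agriculture request and is
# expected to return a (status_code, body) pair; this is an assumption
# for illustration, not the real SDK surface.
def call_with_backoff(call, max_retries=4, base_delay=1.0):
    for attempt in range(max_retries + 1):
        status, body = call()
        if status != 429:          # not throttled: return the result
            return status, body
        if attempt == max_retries:
            raise RuntimeError("throttled: retries exhausted")
        time.sleep(base_delay * (2 ** attempt))  # back off exponentially

# Example: a fake endpoint that is throttled twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, "ok")])
print(call_with_backoff(lambda: next(responses), base_delay=0.0))  # → (200, 'ok')
```

In practice, honor any `Retry-After` header the service returns instead of a fixed schedule; see the throttling concepts page linked above for the actual limits.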
+ ## Next steps * See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md). * Understand our APIs [here](/rest/api/data-manager-for-agri).
databox-online Azure Stack Edge Add Hardware Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-add-hardware-terms.md
Customer will be responsible for a one-time metered shipping fee for the shipmen
### Responsibilities if a Government Customer Moves an Azure Stack Edge Device between CustomerΓÇÖs Locations
-Government Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Azure Stack Edge Device beyond the country border in which Customer receives the Azure Stack Edge Device. For clarity, but not limited to, if a government Customer is in possession of an Azure Stack Edge Device, only the government Customer may, at government CustomerΓÇÖs sole risk and expense, transport the Azure Stack Edge Device to its different locations in accordance with this section and the requirements of the Additional Terms. Customer is responsible for obtaining at CustomerΓÇÖs own risk and expense any export license, import license and other official authorization for the exportation and importation of the Azure Stack Edge Device and CustomerΓÇÖs data to any different Customer location. Customer shall also be responsible for customs clearance to any different Customer location, and will bear all duties, taxes, and other official charges payable upon importation as well as all costs and risks of carrying out customs formalities in a timely manner.
+Government Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Azure Stack Edge Device beyond the country/region border in which Customer receives the Azure Stack Edge Device. For clarity, but not limited to, if a government Customer is in possession of an Azure Stack Edge Device, only the government Customer may, at government CustomerΓÇÖs sole risk and expense, transport the Azure Stack Edge Device to its different locations in accordance with this section and the requirements of the Additional Terms. Customer is responsible for obtaining at CustomerΓÇÖs own risk and expense any export license, import license and other official authorization for the exportation and importation of the Azure Stack Edge Device and CustomerΓÇÖs data to any different Customer location. Customer shall also be responsible for customs clearance to any different Customer location, and will bear all duties, taxes, and other official charges payable upon importation as well as all costs and risks of carrying out customs formalities in a timely manner.
-If Customer transports the Azure Stack Edge Device to a different location, Customer agrees to return the Azure Stack Edge Device to the country location where Customer received it initially, prior to shipping the Azure Stack Edge Device back to Microsoft. Customer acknowledges that there are inherent risks in shipping data on and in connection with the Azure Stack Edge Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to an Azure Stack Edge Device or any data stored on one, including during transit. It is CustomerΓÇÖs responsibility to obtain the appropriate support agreement from Microsoft to meet CustomerΓÇÖs operating objectives for the Azure Stack Edge Device; however, depending on the location to which Customer intends to move the Azure Stack Edge Device, MicrosoftΓÇÖs ability to provide hardware servicing and support may be delayed, or may not be available.
+If Customer transports the Azure Stack Edge Device to a different location, Customer agrees to return the Azure Stack Edge Device to the country/region location where Customer received it initially, prior to shipping the Azure Stack Edge Device back to Microsoft. Customer acknowledges that there are inherent risks in shipping data on and in connection with the Azure Stack Edge Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to an Azure Stack Edge Device or any data stored on one, including during transit. It is CustomerΓÇÖs responsibility to obtain the appropriate support agreement from Microsoft to meet CustomerΓÇÖs operating objectives for the Azure Stack Edge Device; however, depending on the location to which Customer intends to move the Azure Stack Edge Device, MicrosoftΓÇÖs ability to provide hardware servicing and support may be delayed, or may not be available.
## Fees
databox-online Azure Stack Edge Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-prep.md
To create a new Azure Stack Edge resource for an existing device, take the follo
1. Select **+ Create a resource**. Search for and select **Azure Stack Edge**. Then select **Create**.
-1. Select the subscription for the Azure Stack Edge Pro FPGA device and the country to ship the device to in **Ship to**.
+1. Select the subscription for the Azure Stack Edge Pro FPGA device and the country/region to ship the device to in **Ship to**.
- ![Select the subscription and ship-to country for your device](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-01.png)
+ ![Select the subscription and ship-to country/region for your device](media/azure-stack-edge-deploy-prep/create-fpga-existing-resource-01.png)
1. In the list of device types that is displayed, select **Azure Stack Edge Pro - FPGA**. Then choose **Select**.
databox-online Azure Stack Edge Gpu Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-data-residency.md
This article describes the information that you need to help understand the data
## About data residency for Azure Stack Edge
-Azure Stack Edge services uses [Azure Regional Pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) when storing and processing customer data in all the geos where the service is available. For the Southeast Asia (Singapore) region, the service is currently paired with Hong Kong. The Azure region pairing implies that any data stored in Singapore is replicated in Hong Kong. Singapore has laws in place that require that the customer data not leave the country boundaries.
+The Azure Stack Edge service uses [Azure Regional Pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) when storing and processing customer data in all the geos where the service is available. For the Southeast Asia (Singapore) region, the service is currently paired with Hong Kong. The Azure region pairing implies that any data stored in Singapore is replicated in Hong Kong. Singapore has laws in place that require that the customer data not leave the country/region boundaries.
To ensure that the customer data resides in a single region only, a new option is enabled in the Azure Stack Edge service. This option, when selected, lets the service store and process the customer data only in the Singapore region. The customer data is not replicated to Hong Kong. There is service-specific metadata (which is not sensitive data) that is still replicated to the paired region.
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
The following table shows the power supply unit specifications:
The Azure Stack Edge Mini R device also includes an onboard battery that is charged by the power supply.
-An additional [Type 2590 battery](https://www.bren-tronics.com/bt-70791ck.html) can be used along with the onboard battery to extend the use of the device between the charges. This battery should be compliant with all the safety, transportation, and environmental regulations applicable in the country of use.
+An additional [Type 2590 battery](https://www.bren-tronics.com/bt-70791ck.html) can be used along with the onboard battery to extend the use of the device between charges. This battery should be compliant with all the safety, transportation, and environmental regulations applicable in the country/region of use.
| Specification | Value | |--||
databox Data Box Hardware Additional Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-hardware-additional-terms.md
Alternatively, Customer may elect to use CustomerΓÇÖs designated carrier or Cust
## Responsibilities if Customer Moves a Data Box Device between Locations While Customer is in possession of a Data Box Device, Customer may, at its sole risk and expense, transport the Data Box Device to its domestic locations, and international locations as permitted by Microsoft in writing, for use to upload its data in accordance with this section and the requirements of the Additional Terms.
-If Customer wishes to move a Data Box Device to another country, then Customer must be the exporter of record from the country of export and importer of record into the country where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and CustomerΓÇÖs data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner. Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country location where Customer initially received the Data Box Device. If requested, Microsoft may provide MicrosoftΓÇÖs estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device.
+If Customer wishes to move a Data Box Device to another country/region, then Customer must be the exporter of record from the country/region of export and importer of record into the country/region where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and CustomerΓÇÖs data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner. Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country/region border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country/region, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country/region location where Customer initially received the Data Box Device. If requested, Microsoft may provide MicrosoftΓÇÖs estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device.
## Next steps
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Throughout the export process, you are notified via email on all status changes.
## Region availability
-Data Box can transfer data based on the region in which service is deployed, the country or region you ship the device to, and the target storage account where you transfer the data.
+Data Box can transfer data based on the region in which service is deployed, the country/region you ship the device to, and the target storage account where you transfer the data.
### For import
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Learn which recommendations are in each security control in [Security controls a
### Recommendations page has new filters for environment, severity, and available responses
-Azure Security Center monitors all connected resources and generates security recommendations. Use these recommendations to strengthen your hybrid cloud posture and track compliance with the policies and standards relevant to your organization, industry, and country.
+Azure Security Center monitors all connected resources and generates security recommendations. Use these recommendations to strengthen your hybrid cloud posture and track compliance with the policies and standards relevant to your organization, industry, and country/region.
As Security Center continues to expand its coverage and features, the list of security recommendations is growing every month. For example, see [29 preview recommendations added to increase coverage of Azure Security Benchmark](release-notes-archive.md#29-preview-recommendations-added-to-increase-coverage-of-azure-security-benchmark).
event-grid Mqtt Automotive Connectivity And Data Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-automotive-connectivity-and-data-solution.md
The high level architecture diagram shows the main logical blocks and services o
* The **data & analytics services** provides data storage and enables processing and analytics for all data users. It turns data into insights that drive better business decisions. * The vehicle manufacturer provides **digital services** as value add to the end customer, from companion apps to repair and maintenance applications. * Several digital services require **business integration** to backend systems such as Dealer Management (DMS), Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) systems.
-* The **consent management** backend is part of customer management and keeps track of user authorization for data collection according to geographical region and country legislation.
+* The **consent management** backend is part of customer management and keeps track of user authorization for data collection according to country/region legislation.
* Data collected from vehicles is an input to the **digital engineering** process, with the goal of continuous product improvements using analytics and machine learning. * The **smart mobility ecosystem** can subscribe and consume both live telemetry as well as aggregated insights to provide more products and services.
This reference architecture allows automotive manufacturers and mobility provide
* Use feedback data as part of the **digital engineering** process to drive continuous product improvement, proactively address root causes of problems and create new customer value. * Provide new **digital products and services** and digitalize operations with **business integration** with back-end systems like Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM).
-* Share data securely and addressing country-specific requirements for user consent with the broader **smart Mobility ecosystems**.
+* Share data securely and address country/region-specific requirements for user consent with the broader **smart Mobility ecosystems**.
* Integrate with back-end systems for vehicle lifecycle management and consent management to simplify and accelerate the deployment and management of connected vehicle solutions using a **Software Defined Vehicle DevOps Toolchain**. * Store and provide compute at scale for **vehicle and analytics**. * Manage **vehicle connectivity** to millions of devices in a cost-effective way.
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
You can associate [multiple public IP addresses](deploy-multi-public-ip-powershe
This enables the following scenarios: - **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses.-- **SNAT** - More ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. At this time, Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
+- **SNAT** - More ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall uses the primary public IP address first before it uses the other associated public IP addresses for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
## Azure Monitor logging
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
You can use a match condition to:
::: zone pivot="front-door-standard-premium"
-* Filter requests based on a specific IP address, port, country, or region.
+* Filter requests based on a specific IP address, port, or country/region.
* Filter requests by header information. * Filter requests from mobile devices or desktop devices. * Filter requests from request file name and file extension.
You can use a match condition to:
::: zone pivot="front-door-classic"
-* Filter requests based on a specific IP address, country, or region.
+* Filter requests based on a specific IP address or country/region.
* Filter requests by header information. * Filter requests from mobile devices or desktop devices. * Filter requests from request file name and file extension.
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md
errors:
#### Cause
-This error occurs when _add-pod-identity_ is installed on the cluster and the _kube-system_ pods
+This error occurs when _aad-pod-identity_ is installed on the cluster and the _kube-system_ pods
aren't excluded in _aad-pod-identity_. The _aad-pod-identity_ component Node Managed Identity (NMI) pods modify the nodes' iptables to
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for adviso
- microsoft.web/sites/slots/config/web - microsoft.web/sites/workflows
+## authorizationresources
+
+- microsoft.authorization/roleassignments
+- microsoft.authorization/roledefinitions
+- microsoft.authorization/classicadministrators
+ ## chaosresources - microsoft.chaos/experiments/statuses
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.AppPlatform/Spring (Azure Spring Cloud) - microsoft.archive/collections - microsoft.Attestation/attestationProviders (Attestation providers)-- microsoft.authorization/elevateaccessroleassignment-- microsoft.Authorization/resourceManagementPrivateLinks (Resource management private links) - microsoft.automanage/accounts - microsoft.automanage/configurationprofilepreferences - microsoft.automanage/configurationprofiles
For sample queries for this table, see [Resource Graph sample queries for resour
For sample queries for this table, see [Resource Graph sample queries for securityresources](../samples/samples-by-table.md#securityresources). -- microsoft.authorization/locks/providers/assessments/governanceassignments-- microsoft.authorization/roleassignments/providers/assessments/governanceassignments - microsoft.security/assessments - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation) - Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results)
hdinsight Apache Hadoop Hive Pig Udf Dotnet Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-pig-udf-dotnet-csharp.md
description: Learn how to use C# user-defined functions (UDF) with Apache Hive a
Previously updated : 05/30/2022 Last updated : 06/09/2023 # Use C# user-defined functions with Apache Hive and Apache Pig on Apache Hadoop in HDInsight
hdinsight Apache Hbase Phoenix Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-phoenix-psql.md
description: Use the psql tool to load bulk load data into Apache Phoenix tables
Previously updated : 05/30/2022 Last updated : 06/09/2023 # Bulk load data into Apache Phoenix using psql
hdinsight Hbase Troubleshoot Bindexception Address Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-bindexception-address-use.md
Title: BindException - Address already in use in Azure HDInsight
description: BindException - Address already in use in Azure HDInsight Previously updated : 05/30/2022 Last updated : 06/09/2023 # Scenario: BindException - Address already in use in Azure HDInsight
hdinsight Hdinsight Sdk Java Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-java-samples.md
description: Find Java examples on GitHub for common tasks using the HDInsight S
Previously updated : 06/08/2023 Last updated : 06/09/2023 # Azure HDInsight: Java samples
hdinsight Hdinsight Sdk Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-python-samples.md
Title: 'Azure HDInsight: Python samples'
description: Find Python examples on GitHub for common tasks using the HDInsight SDK for Python. Previously updated : 05/30/2022 Last updated : 06/09/2023
hdinsight Overview Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-azure-storage.md
description: Overview of Azure Storage in HDInsight.
Previously updated : 05/30/2022 Last updated : 06/08/2023 # Azure Storage overview in HDInsight
hdinsight Apache Spark Eclipse Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md
description: Use HDInsight Tools in Azure Toolkit for Eclipse to develop Spark a
Previously updated : 05/30/2022 Last updated : 06/09/2023 # Use Azure Toolkit for Eclipse to create Apache Spark applications for an HDInsight cluster
hdinsight Apache Spark Intellij Tool Debug Remotely Through Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md
description: Step-by-step guidance on how to use HDInsight Tools in Azure Toolki
Previously updated : 05/30/2022 Last updated : 06/09/2023 # Debug Apache Spark applications on an HDInsight cluster with Azure Toolkit for IntelliJ through SSH
hdinsight Apache Spark Intellij Tool Plugin Debug Jobs Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-plugin-debug-jobs-remotely.md
description: Learn how to use HDInsight Tools in Azure Toolkit for IntelliJ to r
Previously updated : 05/30/2022 Last updated : 06/09/2023 # Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely in HDInsight through VPN
hdinsight Apache Spark Intellij Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-plugin.md
If you're not going to continue to use this application, delete the cluster that
:::image type="content" source="./media/apache-spark-intellij-tool-plugin/hdinsight-azure-portal-delete-cluster.png" alt-text="Azure portal deletes HDInsight cluster" border="true":::
+## Errors and solutions
+
+If you get build failed errors as shown below:
++
+To resolve this issue, unmark the **src** folder as **Sources**:
+
+1. Navigate to **File** and select **Project Structure**.
+2. Select **Modules** under **Project Settings**.
+3. Select the **src** folder and unmark it as **Sources**.
+4. Select **Apply**, and then select **OK** to close the dialog.
+
+ :::image type="content" source="./media/apache-spark-intellij-tool-plugin/unmark-src-as-sources.png" alt-text="Screenshot showing how to unmark the src folder as Sources." border="true":::
+ ## Next steps In this article, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submit them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
The DICOM service is a managed service within [Azure Health Data Services](../he
- **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace. - **DICOMcast**: Via DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. DICOMcast is available as an open-source feature that can be self-hosted in Azure. Learn more about [deploying DICOMcast](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md). - **Region availability**: DICOM service has a wide range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection, and is continuously expanding.-- **Scalability**: DICOM service is designed out-of-the-box to support different workload levels at a hospital, region, country and global scale without sacrificing any performance spec by using autoscaling features.
+- **Scalability**: DICOM service is designed out-of-the-box to support different workload levels at a hospital, country/region, and global scale without sacrificing any performance spec by using autoscaling features.
- **Role-based access**: You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment. The [open-source DICOM server project](https://github.com/microsoft/dicom-server) is also constantly monitored for feature parity with the managed service, so that developers can deploy the open-source version as a set of Docker containers to speed up development and testing in their environments, and contribute to potential future managed service features.
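The Change Feed's ordered, read-at-your-own-pace logs are consumed with a generic offset-pagination loop. A minimal sketch of that pattern; `fetch_page` is a hypothetical callable standing in for a request to the service's change feed endpoint, not part of any SDK:

```python
def read_all_changes(fetch_page, limit=100):
    """Drain an offset-paginated change feed into a single list.

    fetch_page(offset, limit) must return a list of change records;
    an empty list signals the end of the feed.
    """
    offset = 0
    changes = []
    while True:
        page = fetch_page(offset, limit)
        if not page:  # no more records at this offset
            return changes
        changes.extend(page)
        offset += len(page)  # advance past the records just read
```

Because reads are keyed only by offset, multiple clients can run this loop independently and in parallel without coordinating with each other.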
FHIR&trade; is becoming an important standard for clinical data and provides ext
- **Creating cohorts for research**: Often through queries for patients that match data in both clinical and imaging systems, such as this one (which triggered the effort to integrate FHIR&trade; and DICOM data): "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years." - **Finding outcomes for similar patients to understand options and plan treatments**: When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis, even when these include imaging data. - **Providing a longitudinal view of a patient during diagnosis**: Radiologists, especially teleradiologists, often don't have complete access to a patient's medical history and related imaging studies. Through FHIR&trade; integration, this data can be easily provided, even to radiologists outside of the organization's local network.-- **Closing the feedback loop with teleradiologists**: Ideally a radiologist has access to a hospital's clinical data to close the feedback loop after making a recommendation. However for teleradiologists, this is often not the case. Instead, they're often unable to close the feedback loop after performing a diagnosis, since they don't have access to patient data after the initial read. With no (or limited) access to clinical results or outcomes, they can't get the feedback necessary to improve their skills. As one teleradiologist put it: "Take parathyroid for example. We do more than any other clinic in the country, and yet I have to beg and plead for surgeons to tell me what they actually found. 
Out of the more than 500 studies I do each month, I get direct feedback on only three or four." Through integration with FHIR&trade;, an organization can easily create a tool that will provide direct feedback to teleradiologists, helping them to hone their skills and make better recommendations in the future.
+- **Closing the feedback loop with teleradiologists**: Ideally a radiologist has access to a hospital's clinical data to close the feedback loop after making a recommendation. However for teleradiologists, this is often not the case. Instead, they're often unable to close the feedback loop after performing a diagnosis, since they don't have access to patient data after the initial read. With no (or limited) access to clinical results or outcomes, they can't get the feedback necessary to improve their skills. As one teleradiologist put it: "Take parathyroid for example. We do more than any other clinic in the country/region, and yet I have to beg and plead for surgeons to tell me what they actually found. Out of the more than 500 studies I do each month, I get direct feedback on only three or four." Through integration with FHIR&trade;, an organization can easily create a tool that will provide direct feedback to teleradiologists, helping them to hone their skills and make better recommendations in the future.
- **Closing the feedback loop for AI/ML models**: Machine learning models do best when real-world feedback can be used to improve their models. However, third-party ML model providers rarely get the feedback they need to improve their models over time. For instance, one ISV put it this way: "We use a combination of machine models and human experts to recommend a treatment plan for heart surgery. However, we only rarely get feedback from physicians on how accurate our plan was. For instance, we often recommend a stent size. We'd love to get feedback on if our prediction was correct, but the only time we hear from customers is when there's a major issue with our recommendations." As with feedback for teleradiologists, integration with FHIR&trade; allows organizations to create a mechanism to provide feedback to the model retraining pipeline.
## Deploy DICOM service to Azure
key-vault Create Certificate Signing Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-signing-request.md
The certificate request has now been successfully merged.
## Add more information to the CSR If you want to add more information when creating the CSR, define it in **SubjectName**. You might want to add information such as:-- Country
+- Country/region
- City/locality - State/province - Organization
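These fields combine into the X.500-style string that **SubjectName** accepts (for example, `CN=docs.contoso.com, C=US, ST=WA, O=Contoso`). A minimal sketch of assembling one; `subject_name` is a hypothetical helper for illustration, not a Key Vault API:

```python
def subject_name(common_name, country=None, state=None, locality=None, organization=None):
    """Assemble an X.500-style subject string for a CSR's SubjectName field.

    Only the common name (CN) is required; the optional relative
    distinguished names are appended when provided.
    """
    parts = [f"CN={common_name}"]
    if country:
        parts.append(f"C={country}")       # two-letter country/region code
    if state:
        parts.append(f"ST={state}")        # state/province
    if locality:
        parts.append(f"L={locality}")      # city/locality
    if organization:
        parts.append(f"O={organization}")
    return ", ".join(parts)
```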
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
Lab accounts and labs have a parental relationship. Moving to a sibling relatio
|-|-|-| |Resource Management|Lab account was the only resource tracked in the Azure portal. All other resources were child resources of the lab account and tracked in Lab Services directly.|Lab plans and labs are now sibling resources in Azure. Administrators can use existing tools in the Azure portal to manage labs. Virtual machines will continue to be a child resource of labs.| |Cost tracking|In Azure Cost Management, admins could only track and analyze cost at the service level and at the lab account level.| Cost entries in Azure Cost Management are now for lab virtual machines. Automatic tags on each entry specify the lab plan ID and the lab name. You can analyze cost by lab plan, lab, or virtual machine from within the Azure portal. Custom tags on the lab will also show in the cost data.|
-|Selecting regions|By default, labs were created in the same geography as the lab account. A geography typically aligns with a country and contains one or more Azure regions. Lab owners weren't able to manage exactly which Azure region the labs resided in.|In the lab plan, administrators now can manage the exact Azure regions allowed for lab creation. By default, labs will be created in the same Azure region as the lab plan. </br> Note, when a lab plan has advanced networking enabled, labs are created in the same Azure region as virtual network.|
+|Selecting regions|By default, labs were created in the same geography as the lab account. A geography typically aligns with a country/region and contains one or more Azure regions. Lab owners weren't able to manage exactly which Azure region the labs resided in.|In the lab plan, administrators now can manage the exact Azure regions allowed for lab creation. By default, labs will be created in the same Azure region as the lab plan. </br> Note, when a lab plan has advanced networking enabled, labs are created in the same Azure region as the virtual network.|
|Deletion experience|When a lab account is deleted, all labs within it are also deleted.|When deleting a lab plan, labs *aren't* deleted. After a lab plan is deleted, labs will keep references to their virtual network even if advanced networking is enabled. However, if a lab plan was connected to an Azure Compute Gallery, the labs can no longer export an image to that Azure Compute Gallery.| |Connecting to a virtual network|The lab account provided an option to peer to a virtual network. If you already had labs in the lab account before you peered to a virtual network, the virtual network connection didn't apply to existing labs. Admins couldn't tell which labs in the lab account were peered to the virtual network.|In a lab plan, admins set up the advanced networking only at the time of lab plan creation. Once a lab plan is created, you'll see a read-only connection to the virtual network. If you need to use another virtual network, create a new lab plan configured with the new virtual network.| |Labs portal experience|Labs are listed under lab accounts in [https://labs.azure.com](https://labs.azure.com).|Labs are listed under resource group name in [https://labs.azure.com](https://labs.azure.com). If there are multiple lab plans in the same resource group, educators can choose which lab plan to use when creating the lab.|
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
# What is automated machine learning (AutoML)? [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](./v1/concept-automated-ml.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](concept-automated-ml.md)
Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
# Data concepts in Azure Machine Learning
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you use:"]
-> * [v1](./v1/concept-data.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](concept-data.md)
With Azure Machine Learning, you can bring data from a local machine or an existing cloud-based storage. In this article, you'll learn the main Azure Machine Learning data concepts.
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"]
-> * [v1](v1/concept-mlflow.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](concept-mlflow.md)
[MLflow](https://www.mlflow.org) is an open-source framework that's designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Last updated 01/04/2023
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](./v1/concept-model-management-and-deployment.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](concept-model-management-and-deployment.md)
In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Title: 'Workspace soft-deletion'
+ Title: 'Workspace soft deletion'
-description: Soft-delete allows you to recover workspace data after accidental deletion
+description: Soft delete allows you to recover workspace data after accidental deletion
Last updated 11/07/2022
-monikerRange: 'azureml-api-2 || azureml-api-1'
+monikerRange: 'azureml-api-2'
#Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion.
-# Recover workspace data after accidental deletion with soft delete (Preview)
+# Recover workspace data while soft deleted
-The soft-delete feature for Azure Machine Learning workspace provides a data protection capability that enables you to attempt recovery of workspace data after accidental deletion. Soft delete introduces a two-step approach in deleting a workspace. When a workspace is deleted, it's first soft deleted. While in soft-deleted state, you can choose to recover or permanently delete a workspace and its data during a data retention period.
-
-> [!IMPORTANT]
-> Workspace soft delete is currently in public preview and will become general available on June 9th 2023. The preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> To enroll your Azure Subscription, see [Register soft-delete on an Azure subscription](#register-soft-delete-on-an-azure-subscription).
+The soft delete feature for Azure Machine Learning workspace provides a data protection capability that enables you to attempt recovery of workspace data after accidental deletion. Soft delete introduces a two-step approach in deleting a workspace. When a workspace is deleted, it's first soft deleted. While in soft-deleted state, you can choose to recover or permanently delete a workspace and its data during a data retention period.
## How workspace soft delete works
-When a workspace is soft-deleted, data and metadata stored service-side get soft-deleted, but some configurations get hard-deleted. Below table provides an overview of which configurations and objects get soft-deleted, and which are hard-deleted.
+When a workspace is soft deleted, data and metadata stored service-side get soft deleted, but some configurations get hard deleted. The following table provides an overview of which configurations and objects get soft deleted, and which are hard deleted.
-Data / configuration | Soft-deleted | Hard-deleted
+Data / configuration | Soft deleted | Hard deleted
|| Run History | ✓ | Models | ✓ |
Linked Databricks workspaces | | ✓*
\* *Microsoft attempts recreation or reattachment when a workspace is recovered. Recovery isn't guaranteed; it's a best-effort attempt.*
-After soft-deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). When the retention period expires, or in case you permanently delete a workspace, data and metadata will be actively deleted.
+After soft deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). When the retention period expires, or in case you permanently delete a workspace, data and metadata will be actively deleted.
-## Soft-delete retention period
+## Soft delete retention period
-A default retention period of 14 days holds for deleted workspaces. The retention period indicates how long workspace data remains available after it's deleted. The clock starts on the retention period as soon as a workspace is soft-deleted.
+Deleted workspaces have a default retention period of 14 days. The retention period indicates how long workspace data remains available after it's deleted. The clock starts on the retention period as soon as a workspace is soft deleted.
-During the retention period, soft-deleted workspaces can be recovered or permanently deleted. Any other operations on the workspace, like submitting a training job, will fail. You can't reuse the name of a workspace that has been soft-deleted until the retention period has passed. Once the retention period elapses, a soft deleted workspace automatically gets permanently deleted.
+During the retention period, soft deleted workspaces can be recovered or permanently deleted. Any other operations on the workspace, like submitting a training job, will fail.
-> [!TIP]
-> During preview of workspace soft-delete, the retention period is fixed to 14 days and can't be modified.
+> [!IMPORTANT]
+> You can't reuse the name of a workspace that has been soft deleted until the retention period has passed or the workspace is permanently deleted. Once the retention period elapses, a soft deleted workspace automatically gets permanently deleted.
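The retention window is simple date arithmetic. A minimal sketch assuming the 14-day default described above; `purge_date` and `name_reusable` are hypothetical helpers, not SDK calls:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 14  # default soft delete retention period

def purge_date(soft_deleted_at: datetime) -> datetime:
    """Return when a soft deleted workspace is automatically
    permanently deleted, if not recovered or purged earlier."""
    return soft_deleted_at + timedelta(days=RETENTION_DAYS)

def name_reusable(now: datetime, soft_deleted_at: datetime) -> bool:
    """The workspace name frees up only once the retention period elapses."""
    return now >= purge_date(soft_deleted_at)
```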
## Deleting a workspace
-The default deletion behavior when deleting a workspace is soft delete. Optionally, you may permanently delete a workspace going to soft delete state first by checking __Delete the workspace permanently__ in the Azure portal. Permanently deleting workspaces can only be done one workspace at time, and not using a batch operation.
+The default deletion behavior when deleting a workspace is soft delete. Optionally, you may override the soft delete behavior by permanently deleting your workspace. Permanently deleting a workspace ensures workspace data is immediately deleted. Use this option to meet related compliance requirements, or whenever you require a workspace name to be reused immediately after deletion. This may be useful in dev/test scenarios where you want to create and later delete a workspace.
-Permanently deleting a workspace allows a workspace name to be reused immediately after deletion. This behavior may be useful in dev/test scenarios where you want to create and later delete a workspace. Permanently deleting a workspace may also be required for compliance if you manage highly sensitive data. See [General Data Protection Regulation (GDPR) implications](#general-data-protection-regulation-gdpr-implications) to learn more on how deletions are handled when soft delete is enabled.
+When deleting a workspace from the Azure portal, check __Delete the workspace permanently__. You can permanently delete only one workspace at a time; batch operations aren't supported.
:::image type="content" source="./media/concept-soft-delete/soft-delete-permanently-delete.png" alt-text="Screenshot of the delete workspace form in the portal.":::
-## Manage soft-deleted workspaces
+If you're using the [Azure Machine Learning SDK or CLI](https://learn.microsoft.com/python/api/azure-ai-ml/azure.ai.ml.operations.workspaceoperations#azure-ai-ml-operations-workspaceoperations-begin-delete), you can set the `permanently_delete` flag.
-Soft-deleted workspaces can be managed under the Azure Machine Learning resource provider in the Azure portal. To list soft-deleted workspaces, use the following steps:
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
-1. From the [Azure portal](https://portal.azure.com), select __More services__. From the __AI + machine learning__ category, select __Azure Machine Learning__.
-1. From the top of the page, select __Recently deleted__ to view workspaces that were soft-deleted and are still within the retention period.
-
- :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted.png" alt-text="Screenshot highlighting the recently deleted link.":::
+ml_client = MLClient(
+ DefaultAzureCredential(),
+ subscription_id="<SUBSCRIPTION_ID>",
+ resource_group_name="<RESOURCE_GROUP>"
+)
-1. From the recently deleted workspaces view, you can recover or permanently delete a workspace.
+result = ml_client.workspaces.begin_delete(
+ name="myworkspace",
+ permanently_delete=True,
+ delete_dependent_resources=False
+).result()
- :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted-panel.png" alt-text="Screenshot of the recently deleted workspaces view.":::
+print(result)
+```
+Once permanently deleted, workspace data can no longer be recovered. Permanent deletion of workspace data is also triggered when the soft delete retention period expires.
-## Recover a soft-deleted workspace
+## Manage soft deleted workspaces
-When you select *Recover* on a soft-deleted workspace, it initiates an operation to restore the workspace state. The service attempts recreation or reattachment of a subset of resources, including Azure RBAC role assignments. Hard-deleted resources including compute clusters should be recreated by you.
+Soft deleted workspaces can be managed under the Azure Machine Learning resource provider in the Azure portal. To list soft deleted workspaces, use the following steps:
-Azure Machine Learning recovers Azure RBAC role assignments for the workspace identity, but doesn't recover role assignments you may have added for users or user groups. It may take up to 15 minutes for role assignments to propagate after workspace recovery.
+1. From the [Azure portal](https://portal.azure.com), select __More services__. From the __AI + machine learning__ category, select __Azure Machine Learning__.
+1. From the top of the page, select __Recently deleted__ to view workspaces that were soft deleted and are still within the retention period.
-Recovery of a workspace may not always be possible. Azure Machine Learning stores workspace metadata on [other Azure resources associated with the workspace](concept-workspace.md#associated-resources). In the event these dependent Azure resources were deleted, it may prevent the workspace from being recovered or correctly restored. Dependencies of the Azure Machine Learning workspace must be recovered first, before recovering a deleted workspace. Azure Container Registry isn't a hard requirement required for recovery.
+ :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted.png" alt-text="Screenshot highlighting the recently deleted link.":::
-Enable [data protection capabilities on Azure Storage](../storage/blobs/soft-delete-blob-overview.md) to improve chances of successful recovery.
+1. From the recently deleted workspaces view, you can recover or permanently delete a workspace.
-## Permanently delete a soft-deleted workspace
+ :::image type="content" source="./media/concept-soft-delete/soft-delete-manage-recently-deleted-panel.png" alt-text="Screenshot of the recently deleted workspaces view.":::
-When you select *Permanently delete* on a soft-deleted workspace, it triggers hard deletion of workspace data. Once deleted, workspace data can no longer be recovered. Permanent deletion of workspace data is also triggered when the soft delete retention period expires.
+## Recover a soft deleted workspace
-## Register soft-delete on an Azure subscription
+When you select *Recover* on a soft deleted workspace, it initiates an operation to restore the workspace state. The service attempts recreation or reattachment of a subset of resources, including Azure RBAC role assignments. You must recreate hard-deleted resources, such as compute clusters, yourself.
-During the time of preview, workspace soft delete is enabled on an opt-in basis per Azure subscription. When soft delete is enabled for a subscription, it's enabled for all Azure Machine Learning workspaces in that subscription.
+Azure Machine Learning recovers Azure RBAC role assignments for the workspace identity, but doesn't recover role assignments you've added to the workspace. It may take up to 15 minutes for role assignments to propagate after workspace recovery.
-To enable workspace soft delete on your Azure subscription, [register the preview feature](../azure-resource-manager/management/preview-features.md?tabs=azure-portal#register-preview-feature) in the Azure portal. Select `wssoftdeete` or `Workspace soft delete` under the `Microsoft.MachineLearningServices` resource provider. It may take 15 minutes for the UX to appear in the Azure portal after registering your subscription.
+Recovery of a workspace may not always be possible. Azure Machine Learning stores workspace metadata on [other Azure resources associated with the workspace](concept-workspace.md#associated-resources). If these dependent Azure resources were deleted, the workspace may not be recovered or correctly restored. Recover the dependencies of the Azure Machine Learning workspace first, before recovering the deleted workspace. The following table outlines recovery options for each dependency of the Azure Machine Learning workspace.
-Before disabling workspace soft delete on an Azure subscription, purge or recover soft-deleted workspaces. After you disable soft delete on a subscription, workspaces that remain in soft deleted state are automatically purged when the retention period elapses.
+|Dependency|Recovery approach|
+|---|---|
+|Azure Key Vault| [Recover a deleted Azure Key Vault instance](../key-vault/general/soft-delete-overview.md) |
+|Azure Storage|[Recover a deleted Azure storage account](../storage/common/storage-account-recover.md).|
+|Azure Container Registry|Azure Container Registry isn't a hard requirement for workspace recovery. Azure Machine Learning can regenerate images for custom environments.|
+|Azure Application Insights| First, [recover your log analytics workspace](../azure-monitor/logs/delete-workspace.md). Then recreate an Application Insights instance with the original name.|
## Billing implications
-In general, when a workspace is in soft-deleted state, there are only two operations possible: 'permanently delete' and 'recover'. All other operations will fail. Therefore, even though the workspace exists, no compute operations can be performed and hence no usage will occur. When a workspace is soft-deleted, any cost-incurring resources including compute clusters are hard deleted.
+In general, when a workspace is in soft deleted state, there are only two operations possible: 'permanently delete' and 'recover'. All other operations will fail. Therefore, even though the workspace exists, no compute operations can be performed and hence no usage will occur. When a workspace is soft deleted, any cost-incurring resources including compute clusters are hard deleted.
> [!IMPORTANT]
-> Workspaces that use [customer-managed keys for encryption](concept-data-encryption.md) store additional service data in your subscription in a managed resource group. When a workspace is soft-deleted, the managed resource group and resources in it will not be deleted and will incur cost until the workspace is hard-deleted.
+> Workspaces that use [customer-managed keys for encryption](concept-data-encryption.md) store additional service data in your subscription in a managed resource group. When a workspace is soft deleted, the managed resource group and resources in it will not be deleted and will incur cost until the workspace is hard-deleted.
## General Data Protection Regulation (GDPR) implications
-After soft-deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). From a GDPR and privacy perspective, a request to delete personal data should be interpreted as a request for *permanent* deletion of a workspace and not soft delete.
+After soft deletion, the service keeps necessary data and metadata during the recovery [retention period](#soft-delete-retention-period). From a GDPR and privacy perspective, a request to delete personal data should be interpreted as a request for *permanent* deletion of a workspace and not soft delete.
When the retention period expires, or in case you permanently delete a workspace, data and metadata will be actively deleted. You could choose to permanently delete a workspace at the time of deletion.
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
ms.devlang: azurecli
# Train models with Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](v1/concept-train-machine-learning-model.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current)](concept-train-machine-learning-model.md)
Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
with fs.open('./folder1/file1.csv') as f:
```python
from azureml.fsspec import AzureMachineLearningFileSystem

# instantiate file system using following URI
```
-fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastore/datastorename')
+fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastorename>/paths/')
```python
# you can specify recursive as False to upload a file
fs.upload(lpath='data/upload_files/crime-spring.csv', rpath='data/fsspec', recursive=False, **{'overwrite': 'MERGE_WITH_OVERWRITE'})
```
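The long-form datastore URI in the snippet above has a fixed shape. As an illustration only (this helper is not part of `azureml.fsspec`; the angle-bracket values are the same placeholders used in the snippet), the URI can be assembled like this:

```python
def datastore_uri(subscription_id: str, resource_group: str, workspace: str,
                  datastore: str, path: str = "") -> str:
    """Assemble a long-form azureml:// datastore URI of the shape fsspec expects."""
    return (
        f"azureml://subscriptions/{subscription_id}"
        f"/resourcegroups/{resource_group}"
        f"/workspaces/{workspace}"
        f"/datastores/{datastore}"
        f"/paths/{path}"
    )

uri = datastore_uri("<subid>", "<rgname>", "<workspace_name>", "<datastorename>")
```

Keeping the URI construction in one place avoids typos in the `/datastores/<name>/paths/` segments, which is exactly the part corrected in the diff above.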
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
# Data administration
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/concept-network-data-access.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-administrate-data-authentication.md)
Learn how to manage data access and how to authenticate in Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
show_latex: true
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-auto-train-forecast.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-auto-train-forecast.md)
In this article, you'll learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Last updated 07/13/2022
# Set up AutoML to train computer vision models [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/how-to-auto-train-image-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-auto-train-image-models.md)
In this article, you learn how to train computer vision models on image data with automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
# Set up AutoML to train a natural language processing model [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
-> * [v1](./v1/how-to-auto-train-nlp-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-auto-train-nlp-models.md)
In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 or the Azure Machine Learning CLI v2.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
# Set up AutoML training with the Azure Machine Learning Python SDK v2 [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
-> * [v1](./v1/how-to-configure-auto-train.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-configure-auto-train.md)
In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-azure-machine-learning-cli.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-configure-cli.md)
The `ml` extension to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Last updated 08/29/2022
[!INCLUDE [CLI v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [CLI or SDK v1](v1/how-to-configure-private-link.md?view=azureml-api-1&preserve-view=true)
-> * [CLI v2 (current)](how-to-configure-private-link.md)
In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Last updated 10/19/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI or SDK version you are using:"]
-> * [v1](v1/how-to-create-attach-compute-cluster.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-create-attach-compute-cluster.md)
Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 06/02/2023
# Create and manage data assets [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-create-register-datasets.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-create-data-assets.md)
This article shows how to create and manage data assets in Azure Machine Learning.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Last updated 12/28/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [v1](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-create-manage-compute-instance.md)
Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
# Create datastores
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](v1/how-to-access-data.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-datastore.md)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](./v1/how-to-deploy-mlflow-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to specify a scoring script or an environment. This characteristic is referred to as __no-code deployment__.
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](./v1/how-to-deploy-mlflow-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-deploy-mlflow-models.md)
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both real-time and batch inference. Learn also about the different tools you can use to perform management of the deployment.
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/how-to-use-managed-identities.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](./how-to-identity-based-service-authentication.md)
Azure Machine Learning is composed of multiple Azure services. There are multiple ways that authentication can happen between Azure Machine Learning and the services it relies on.
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
# Import data assets (preview) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v2 ](how-to-import-data-assets.md)
- In this article, learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most-frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name. A data import creates a cache of the source data, along with metadata, for faster and reliable data access in Azure Machine Learning training jobs. The data cache avoids network and connection constraints. The cached data is versioned to support reproducibility. This provides versioning capabilities for data imported from SQL Server sources. Additionally, the cached data provides data lineage for auditability. A data import uses ADF (Azure Data Factory pipelines) behind the scenes, which means that users can avoid complex interactions with ADF. Behind the scenes, Azure Machine Learning also handles management of ADF compute resource pool size, compute resource provisioning, and tear-down, to optimize data transfer by determining proper parallelization.
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-inference-onnx-automl-image-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-inference-onnx-automl-image-models.md)
In this article, you will learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
machine-learning How To Launch Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-launch-vs-code-remote.md
You can create the connection from either the **Notebooks** or **Compute** section.
* Notebooks
- 1. Select the **Notebooks** tab
+ 1. Select the **Notebooks** tab.
1. In the *Notebooks* tab, select the file you want to edit.
+ 1. If the compute instance is stopped, select **Start compute** and wait until it is running.
+
+ :::image type="content" source="media/tutorial-azure-ml-in-a-day/start-compute.png" alt-text="Screenshot shows how to start compute if it is stopped." lightbox="media/tutorial-azure-ml-in-a-day/start-compute.png":::
+ 1. Select **Editors > Edit in VS Code (Web)**. :::image type="content" source="media/how-to-launch-vs-code-remote/edit-in-vs-code.png" alt-text="Screenshot of how to connect to Compute Instance VS Code (Web) Azure Machine Learning Notebook." lightbox="media/how-to-launch-vs-code-remote/edit-in-vs-code.png":::
You can create the connection from either the **Notebooks** or **Compute** section.
* Compute
  1. Select the **Compute** tab
- 1. In the *Applications* column, select **VS Code (Web)** for the compute instance you want to connect to.
+ 1. If the compute instance you wish to use is stopped, select it and then select **Start**.
+ 1. Once the compute instance is running, in the *Applications* column, select **VS Code (Web)**.
:::image type="content" source="media/how-to-launch-vs-code-remote/vs-code-from-compute.png" alt-text="Screenshot of how to connect to Compute Instance VS Code Azure Machine Learning studio." lightbox="media/how-to-launch-vs-code-remote/vs-code-from-compute.png":::
You can create the connection from either the **Notebooks** or **Compute** section.
1. Select the **Notebooks** tab
1. In the *Notebooks* tab, select the file you want to edit.
+ 1. If the compute instance is stopped, select **Start compute** and wait until it is running.
+
+ :::image type="content" source="media/tutorial-azure-ml-in-a-day/start-compute.png" alt-text="Screenshot shows how to start compute if it is stopped." lightbox="media/tutorial-azure-ml-in-a-day/start-compute.png":::
+ 1. Select **Edit in VS Code (Desktop)**. :::image type="content" source="media/how-to-launch-vs-code-remote/edit-in-vs-code.png" alt-text="Screenshot of how to connect to Compute Instance VS Code Azure Machine Learning Notebook." lightbox="media/how-to-launch-vs-code-remote/edit-in-vs-code.png":::
You can create the connection from either the **Notebooks** or **Compute** section.
* Compute
- 1. Select the **Compute** tab
- 1. In the *Application URI* column, select **VS Code (Desktop)** for the compute instance you want to connect to.
+ 1. Select the **Compute** tab.
+ 1. If the compute instance you wish to use is stopped, select it and then select **Start**.
+ 1. Once the compute instance is running, in the *Applications* column, select **VS Code (Desktop)**.
:::image type="content" source="media/how-to-launch-vs-code-remote/studio-compute-instance-vs-code-launch.png" alt-text="Screenshot of how to connect to Compute Instance VS Code Azure Machine Learning studio." lightbox="media/how-to-launch-vs-code-remote/studio-compute-instance-vs-code-launch.png":::
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
-> * [v1](./v1/how-to-log-view-metrics.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current)](how-to-log-view-metrics.md)
Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow as it supports local mode to cloud portability.
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/how-to-use-environments.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-manage-environments-v2.md)
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
# Manage imported data assets (preview) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v2](how-to-import-data-assets.md)
- In this article, learn how to manage imported data assets from a life-cycle perspective. We learn how to modify or update auto-delete settings on the data assets imported on to a managed datastore (`workspacemanagedstore`) that Microsoft manages for the customer. > [!NOTE]
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](v1/how-to-manage-workspace-cli.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-manage-workspace-cli.md)
In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources.
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](v1/how-to-manage-workspace.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current)](how-to-manage-workspace.md)
In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the [Azure portal](https://portal.azure.com) or the [SDK for Python](https://aka.ms/sdk-v2-install).
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
Last updated 05/26/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/how-to-prepare-datasets-for-automl-images.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-prepare-datasets-for-automl-images.md)
> [!IMPORTANT]
> Support for training computer vision models with automated ML in Azure Machine Learning is an experimental public preview feature. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning CLI extension you use:"]
-> * [v1](v1/how-to-train-with-datasets.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-read-write-data-v2.md)
In this article you learn:
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Last updated 09/06/2022
# Secure an Azure Machine Learning inferencing environment with virtual networks
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [SDK/CLI v1](v1/how-to-secure-inferencing-vnet.md?view=azureml-api-1&preserve-view=true)
-> * [SDK/CLI v2 (current version)](how-to-secure-inferencing-vnet.md)
In this article, you learn how to secure inferencing environments (online endpoints) with a virtual network in Azure Machine Learning. There are two inference options that can be secured using a VNet:
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
ms.devlang: azurecli
[!INCLUDE [SDK v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [SDK v1](./v1/how-to-secure-training-vnet.md?view=azureml-api-1&preserve-view=true)
-> * [SDK v2 (current version)](how-to-secure-training-vnet.md)
Azure Machine Learning compute instance and compute cluster can be used to securely train models in an Azure Virtual Network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
[!INCLUDE [sdk/cli v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK/CLI extension you are using:"]
-> * [v2 (current version)](how-to-secure-workspace-vnet.md)
-> * [v1](v1/how-to-secure-workspace-vnet.md?view=azureml-api-1&preserve-view=true)
In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in an Azure Virtual Network.
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-setup-authentication.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-setup-authentication.md)
Learn how to set up authentication to your Azure Machine Learning workspace from the Azure CLI or Azure Machine Learning SDK v2. Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
# Train Keras models at scale with Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-keras.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-train-keras.md)
In this article, learn how to run your Keras training scripts using the Azure Machine Learning Python SDK v2.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
# Train PyTorch models at scale with Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-pytorch.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-train-pytorch.md)
In this article, you'll learn to train, hyperparameter tune, and deploy a [PyTorch](https://pytorch.org/) model using the Azure Machine Learning Python SDK v2.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
# Train scikit-learn models at scale with Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-scikit-learn.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-train-scikit-learn.md)
In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning Python SDK v2.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
# Train TensorFlow models at scale with Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-tensorflow.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-train-tensorflow.md)
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning Python SDK v2.
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-tune-hyperparameters.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-tune-hyperparameters.md)
Automate efficient hyperparameter tuning using Azure Machine Learning SDK v2 and CLI v2 by way of the SweepJob type.
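The SweepJob type itself is configured through the SDK v2 or CLI v2. As a language-agnostic illustration of what a random-sampling sweep does conceptually — not the SweepJob API, and with a made-up objective and search space — consider this self-contained sketch:

```python
import random

# Toy objective: pretend "accuracy" peaks at learning_rate≈0.1 and batch_size≈32.
def objective(learning_rate: float, batch_size: int) -> float:
    return 1.0 - abs(learning_rate - 0.1) - abs(batch_size - 32) / 100

def random_sweep(n_trials: int, seed: int = 0):
    """Sample hyperparameters at random and keep the best-scoring trial."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.001, 1.0),     # analogous to a Uniform search space
            "batch_size": rng.choice([16, 32, 64, 128]),  # analogous to a Choice search space
        }
        score = objective(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

best_score, best_params = random_sweep(50)
```

A real sweep adds early-termination policies and distributes trials across compute, but the core loop — sample, evaluate, keep the best — is the same.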
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
# Train a small object detection model with AutoML [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-use-automl-small-object-detect.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-use-automl-small-object-detect.md)
In this article, you'll learn how to train an object detection model to detect small objects in high-resolution images with [automated ML](concept-automated-ml.md) in Azure Machine Learning.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
ms.devlang: azurecli
# Track ML experiments and models with MLflow
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you're using:"]
-> * [v1](./v1/how-to-use-mlflow.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-use-mlflow-cli-runs.md)
__Tracking__ refers to the process of saving all the experiment-related information that you may find relevant for every experiment you run. Such metadata varies based on your project, but it may include:
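To show the kind of metadata a tracking backend records, here is a minimal stand-in. It illustrates the concept only — it is not the MLflow API, and the class, experiment name, and values are invented for the example:

```python
import json
import time

class Run:
    """Toy record of one experiment run: parameters, metric history, and tags."""

    def __init__(self, experiment: str):
        self.record = {
            "experiment": experiment,
            "start_time": time.time(),
            "params": {},
            "metrics": {},
            "tags": {},
        }

    def log_param(self, key, value):
        self.record["params"][key] = value

    def log_metric(self, key, value):
        # Metrics are appended, preserving their history over training steps.
        self.record["metrics"].setdefault(key, []).append(value)

    def set_tag(self, key, value):
        self.record["tags"][key] = value

    def to_json(self) -> str:
        return json.dumps(self.record)

run = Run("credit-default")
run.log_param("learning_rate", 0.1)
run.log_metric("loss", 0.75)
run.log_metric("loss", 0.42)
run.set_tag("framework", "sklearn")
```

A real tracking service additionally stores artifacts (models, plots, logs) and makes runs queryable and comparable across experiments.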
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
# Use authentication credential secrets in Azure Machine Learning jobs [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](v1/how-to-use-secrets-in-runs.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](how-to-use-secrets-in-runs.md)
Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in clear text is insecure as it would potentially expose the secret.
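The principle — never hard-code credentials in a training script — can be sketched with plain environment variables. This is an illustration of the pattern only; the article describes the actual Azure Machine Learning mechanism, and the variable name here is a made-up example:

```python
import os

def get_required_secret(name: str) -> str:
    """Read a secret from the job's environment rather than hard-coding it in the script."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Secret {name!r} is not set in this job's environment")
    return value

# In a real job, the platform would inject the secret; we set it here only for the demo.
os.environ["DB_PASSWORD"] = "example-only"
password = get_required_secret("DB_PASSWORD")
```

Failing fast when a secret is missing gives a clear error instead of an opaque connection failure deep inside the training run.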
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
Last updated 01/18/2022
# Hyperparameters for computer vision tasks in automated machine learning [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/reference-automl-images-hyperparameters.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](reference-automl-images-hyperparameters.md)
Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
Last updated 09/09/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/reference-automl-images-schema.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](reference-automl-images-schema.md)
Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks during training and inference.
machine-learning Reference Yaml Component Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-spark.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v2 (current version)](./reference-yaml-component-spark.md)
<!-- The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/sparkComponent.schema.json. --> [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Reference Yaml Job Parallel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-parallel.md
Last updated 09/27/2022
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-pipeline-yaml.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](reference-yaml-job-parallel.md)
> [!IMPORTANT]
> A parallel job can be used only as a single step inside an Azure Machine Learning pipeline job, so there is no source JSON schema for parallel jobs at this time. This document lists the valid keys and their values when creating a parallel job in a pipeline.
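For orientation, a parallel step nested inside a pipeline job might be sketched as follows. The key names reflect this reference, but the paths and values are placeholders; consult the article for the authoritative key list:

```yaml
# Sketch only: a parallel job as a single step inside a pipeline job.
jobs:
  batch_score_step:
    type: parallel
    inputs:
      scoring_data:
        type: mltable
        path: ./my-data              # placeholder path
    input_data: ${{inputs.scoring_data}}
    mini_batch_size: "10kb"
    resources:
      instance_count: 2
    max_concurrency_per_instance: 2
    error_threshold: 5
    task:
      type: run_function
      code: ./src                    # placeholder source directory
      entry_script: score.py         # placeholder entry script
```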
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-pipeline-yaml.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](reference-yaml-job-pipeline.md)
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json.
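A minimal pipeline job that targets that schema might look like the following sketch; the display name, environment, and compute references are placeholders:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: example-pipeline              # placeholder
jobs:
  train_step:
    type: command
    code: ./src                             # placeholder source directory
    command: python train.py
    environment: azureml:my-environment:1   # placeholder environment
    compute: azureml:cpu-cluster            # placeholder compute target
```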
machine-learning Reference Yaml Job Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-spark.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v2 (current version)](./reference-yaml-job-spark.md)
<!-- The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/sparkJob.schema.json. --> [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)]
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
# Explore Azure Machine Learning with Jupyter Notebooks [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](v1/samples-notebooks.md?view=azureml-api-1&preserve-view=true)
-> * [v2](samples-notebooks.md)
The [AzureML-Examples](https://github.com/Azure/azureml-examples) repository includes the latest (v2) Azure Machine Learning Python CLI and SDK samples. For information on the various example types, see the [readme](https://github.com/Azure/azureml-examples#azure-machine-learning-examples).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/tutorial-auto-train-image-models.md?view=azureml-api-1&preserve-view=true)
-> * [v2 (current version)](tutorial-auto-train-image-models.md)
In this tutorial, you learn how to train an object detection model using Azure Machine Learning automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml.md
# What is automated machine learning (AutoML)? [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](concept-automated-ml.md)
-> * [v2 (current version)](../concept-automated-ml.md?view=azureml-api-2&preserve-view=true)
Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
[!INCLUDE [CLI v1](../../../includes/machine-learning-cli-v1.md)] [!INCLUDE [SDK v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](concept-data.md)
-> * [v2 (current version)](../concept-data.md?view=azureml-api-2&preserve-view=true)
Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities:
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow.md
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"]
-> * [v1](concept-mlflow.md)
-> * [v2 (current version)](../concept-mlflow.md?view=azureml-api-2&preserve-view=true)
[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API are collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api). This component of MLflow logs and tracks your training run metrics and model artifacts no matter where your experiment runs: on your computer, on a remote compute target, on a virtual machine, or in an Azure Databricks cluster.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
Last updated 01/04/2023
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](concept-model-management-and-deployment.md)
-> * [v2 (current version)](../concept-model-management-and-deployment.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-network-data-access.md
Last updated 11/16/2022
# Network data access with Azure Machine Learning studio
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](concept-network-data-access.md)
-> * [v2 (current version)](../how-to-administrate-data-authentication.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model.md
ms.devlang: azurecli
# Train models with Azure Machine Learning (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](concept-train-machine-learning-model.md)
-> * [v2 (current)](../concept-train-machine-learning-model.md?view=azureml-api-2&preserve-view=true)
Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
# Connect to storage services on Azure with datastores
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](how-to-access-data.md)
-> * [v2 (current version)](../how-to-datastore.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](how-to-attach-compute-targets.md)
-> * [v2 (current version)](../how-to-train-model.md?view=azureml-api-2&preserve-view=true)
Learn how to attach Azure compute resources to your Azure Machine Learning workspace with SDK v1. Then you can use these resources as training and inference [compute targets](../concept-compute-target.md) in your machine learning tasks.
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
show_latex: true
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"]
-> * [v1](how-to-auto-train-forecast.md)
-> * [v2 (current version)](../how-to-auto-train-forecast.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-auto-train-image-models.md)
-> * [v2 (current version)](../how-to-auto-train-image-models.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
# Set up AutoML to train a natural language processing model with Python (preview) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
-> * [v1](how-to-auto-train-nlp-models.md)
-> * [v2 (current version)](../how-to-auto-train-nlp-models.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [preview disclaimer](../../../includes/machine-learning-preview-generic-disclaimer.md)]
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train.md
# Set up AutoML training with Python [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
-> * [v1](how-to-configure-auto-train.md)
-> * [v2 (current version)](../how-to-configure-auto-train.md?view=azureml-api-2&preserve-view=true)
In this guide, learn how to set up an automated machine learning (AutoML) training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide details the various options that you can use to configure automated ML experiments.
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-private-link.md
Last updated 08/29/2022
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [CLI or SDK v1](how-to-configure-private-link.md)
-> * [CLI v2 (current version)](../how-to-configure-private-link.md?view=azureml-api-2&preserve-view=true)
In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](../how-to-network-security-overview.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Last updated 05/02/2022
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [v1](how-to-create-attach-compute-cluster.md)
-> * [v2 (current version)](../how-to-create-attach-compute-cluster.md?view=azureml-api-2&preserve-view=true)
Learn how to create and manage a [compute cluster](../concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
Last updated 04/21/2022
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](how-to-create-attach-kubernetes.md)
-> * [v2 (current version)](../how-to-attach-kubernetes-anywhere.md?view=azureml-api-2&preserve-view=true)
> [!IMPORTANT]
> This article shows how to use the CLI and SDK v1 to create or attach an Azure Kubernetes Service cluster, which is now considered a **legacy** feature. To attach an Azure Kubernetes Service cluster using the recommended v2 approach, see [Introduction to Kubernetes compute target in v2](../how-to-attach-kubernetes-anywhere.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Last updated 05/02/2022
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [v1](how-to-create-manage-compute-instance.md)
-> * [v2 (current version)](../how-to-create-manage-compute-instance.md?view=azureml-api-2&preserve-view=true)
Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
Last updated 09/28/2022
# Create Azure Machine Learning datasets
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](how-to-create-register-datasets.md)
-> * [v2 (current version)](../how-to-create-data-assets.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
# Deploy MLflow models as Azure web services [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform you are using:"]
-> * [v1](how-to-deploy-mlflow-models.md)
-> * [v2 (current version)](../how-to-deploy-mlflow-models-online-endpoints.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure web service, so you can apply Azure Machine Learning's model management and data drift detection capabilities to your production models. See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow and Azure Machine Learning integrations.
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-inference-onnx-automl-image-models.md)
-> * [v2 (current version)](../how-to-inference-onnx-automl-image-models.md?view=azureml-api-2&preserve-view=true)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
-> * [v1](how-to-log-view-metrics.md)
-> * [v2](../how-to-log-view-metrics.md?view=azureml-api-2&preserve-view=true)
Log real-time information using both the default Python logging package and Azure Machine Learning Python SDK-specific functionality. You can log locally and send logs to your workspace in the portal.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-manage-workspace-cli.md)
-> * [v2 (current version)](../how-to-manage-workspace-cli.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
# Manage Azure Machine Learning workspaces with the Python SDK (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](how-to-manage-workspace.md)
-> * [v2 (current version)](../how-to-manage-workspace.md?view=azureml-api-2&preserve-view=true)
In this article, you create, view, and delete [**Azure Machine Learning workspaces**](../concept-workspace.md) for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md), using the [SDK for Python](/python/api/overview/azure/ml/).
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
Last updated 10/13/2021
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](how-to-prepare-datasets-for-automl-images.md)
-> * [v2 (current version)](../how-to-prepare-datasets-for-automl-images.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-inferencing-vnet.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [SDK/CLI v1](how-to-secure-inferencing-vnet.md)
-> * [SDK/CLI v2 (current version)](../how-to-secure-inferencing-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning. This article is specific to the SDK/CLI v1 deployment workflow of deploying a model as a web service.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
[!INCLUDE [SDK v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [SDK v1](how-to-secure-training-vnet.md)
-> * [SDK v2 (current version)](../how-to-secure-training-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning using the Python SDK v1.
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
[!INCLUDE [sdk/cli v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK/CLI extension you are using:"]
-> * [v1](how-to-secure-workspace-vnet.md)
-> * [v2 (current version)](../how-to-secure-workspace-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in an Azure Virtual Network.
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](how-to-setup-authentication.md)
-> * [v2 (current version)](../how-to-setup-authentication.md?view=azureml-api-2&preserve-view=true)
Learn how to set up authentication to your Azure Machine Learning workspace. Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) in most scenarios. In general, there are four authentication workflows that you can use when connecting to the workspace:
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
# Train Keras models at scale with Azure Machine Learning (SDK v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](how-to-train-keras.md)
-> * [v2 (current version)](../how-to-train-keras.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your Keras training scripts with Azure Machine Learning.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
# Train PyTorch models at scale with Azure Machine Learning SDK (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](how-to-train-pytorch.md)
-> * [v2 (current version)](../how-to-train-pytorch.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
# Train scikit-learn models at scale with Azure Machine Learning (SDK v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](how-to-train-scikit-learn.md)
-> * [v2 (current version)](../how-to-train-scikit-learn.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
# Train TensorFlow models at scale with Azure Machine Learning SDK (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](how-to-train-tensorflow.md)
-> * [v2 (current version)](../how-to-train-tensorflow.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
# Train models with Azure Machine Learning datasets
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-train-with-datasets.md)
-> * [v2 (current version)](../how-to-read-write-data-v2.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters.md
# Hyperparameter tuning a model with Azure Machine Learning (v1) [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-tune-hyperparameters.md)
-> * [v2 (current version)](../how-to-tune-hyperparameters.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-use-automl-small-object-detect.md)
-> * [v2 (current version)](../how-to-use-automl-small-object-detect.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
ms.devlang: azurecli
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](how-to-use-environments.md)
-> * [v2 (current version)](../how-to-manage-environments-v2.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) using CLI v1. Use the environments to track and reproduce your projects' software dependencies as they evolve. The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) v1 mirrors most of the functionality of the Python SDK v1. You can use it to create and manage environments.
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](how-to-use-managed-identities.md)
-> * [v2 (current version)](../how-to-identity-based-service-authentication.md?view=azureml-api-2&preserve-view=true)
[Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) allow you to configure your workspace with the *minimum required permissions to access resources*.
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
# Track ML models with MLflow and Azure Machine Learning [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](how-to-use-mlflow.md)
-> * [v2 (current version)](../how-to-use-mlflow-cli-runs.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to enable [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) to connect Azure Machine Learning as the backend of your MLflow experiments.
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-secrets-in-runs.md
# Use authentication credential secrets in Azure Machine Learning training jobs [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](how-to-use-secrets-in-runs.md)
-> * [v2 (current version)](../how-to-use-secrets-in-runs.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to use secrets in training jobs securely. Authentication information such as your username and password are secrets. For example, if you connect to an external database in order to query training data, you need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure, because it would expose the secret.
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension or Python SDK you are using:"]
-> * [v1](introduction.md)
-> * [v2 (current version)](../index.yml?view=azureml-api-2&preserve-view=true)
All articles in this section document the use of the first version of Azure Machine Learning Python SDK (v1) or Azure CLI ml extension (v1).
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-hyperparameters.md
Last updated 01/18/2022
# Hyperparameters for computer vision tasks in automated machine learning (v1)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](reference-automl-images-hyperparameters.md)
-> * [v2 (current version)](../reference-automl-images-hyperparameters.md?view=azureml-api-2&preserve-view=true)
Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-schema.md
Last updated 10/13/2021
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](reference-automl-images-schema.md)
-> * [v2 (current version)](../reference-automl-images-schema.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](reference-azure-machine-learning-cli.md)
-> * [v2 (current version)](../how-to-configure-cli.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](reference-pipeline-yaml.md)
-> * [v2 (current version)](../reference-yaml-job-pipeline.md?view=azureml-api-2&preserve-view=true)
> [!NOTE] > The YAML syntax detailed in this document is based on the JSON schema for the v1 version of the ML CLI extension. This syntax is guaranteed only to work with the ML CLI v1 extension.
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks.md
# Explore Azure Machine Learning with Jupyter Notebooks (v1)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](<samples-notebooks.md>)
-> * [v2](../samples-notebooks.md?view=azureml-api-2&preserve-view=true)
The [Azure Machine Learning Notebooks repository](https://github.com/azure/machinelearningnotebooks) includes Azure Machine Learning Python SDK (v1) samples. These Jupyter notebooks are designed to help you explore the SDK and serve as models for your own machine learning projects. In this repository, you'll find tutorial notebooks in the **tutorials** folder and feature-specific notebooks in the **how-to-use-azureml** folder.
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-bring-data.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](tutorial-1st-experiment-bring-data.md)
-> * [v2](../tutorial-1st-experiment-bring-data.md?view=azureml-api-2&preserve-view=true)
This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](tutorial-1st-experiment-hello-world.md)
-> * [v2](../tutorial-1st-experiment-hello-world.md?view=azureml-api-2&preserve-view=true)
In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-sdk-train.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](tutorial-1st-experiment-sdk-train.md)
-> * [v2](../tutorial-1st-experiment-sdk-train.md?view=azureml-api-2&preserve-view=true)
This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a three-part tutorial series*.
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models.md
# Tutorial: Train an object detection model (preview) with AutoML and Python (v1)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](tutorial-auto-train-image-models.md)
-> * [v2 (current version)](../tutorial-auto-train-image-models.md?view=azureml-api-2&preserve-view=true)
>[!IMPORTANT]
migrate How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate.md
To migrate large amounts of data to Azure, you can order an Azure Data Box for
1. In **Overview**, select **Explore more**.
2. In **Explore more**, select **Data box**.
3. In **Get started with Data Box**, select the subscription and resource group you want to use when ordering a Data Box.
-4. The **Transfer type** is an import to Azure. Specify the country in which the data resides, and the Azure region to which you want to transfer the data.
+4. The **Transfer type** is an import to Azure. Specify the country/region in which the data resides, and the Azure region to which you want to transfer the data.
5. Click **Apply** to save the settings.

## Next steps
mysql Tutorial Add Mysql Connection In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-add-mysql-connection-in-key-vault.md
+
+ Title: "Tutorial: Manage MySQL credentials in Azure Key Vault"
+description: "This tutorial shows how to store and get an Azure Database for MySQL Flexible Server connection string in Azure Key Vault"
+ Last updated : 06/08/2023
+# Tutorial: Manage MySQL credentials in Azure Key Vault
+You can store the MySQL connection string in Azure Key Vault to ensure that sensitive information is securely managed and accessed only by authorized users or applications. Additionally, any changes to the connection string can be easily updated in the Key Vault without modifying the application code.
+
+## Prerequisites
+
+- You need an Azure subscription. If you don't already have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- All access to secrets takes place through Azure Key Vault. For this quickstart, create a key vault using [Azure portal](../../key-vault/general/quick-create-portal.md), [Azure CLI](../../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../../key-vault/general/quick-create-powershell.md). Make sure you have the necessary permissions to manage and access the Key Vault.
+- Install .NET, Java, PHP, or Python, depending on the framework you use for your application.
+
+## Add a secret to Key Vault
+
+To add a secret to the vault, follow these steps:
+
+1. Navigate to your new key vault in the Azure portal.
+1. On the Key Vault settings pages, select **Secrets**.
+1. Select **Generate/Import**.
+1. On the **Create a secret** page, provide the following information:
+ - **Upload options**: Manual.
+ - **Name**: Type a name for the secret. The secret name must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -. For more information on naming, see [Key Vault objects, identifiers, and versioning](../../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning).
+ - **Value**: Type a value for the secret. Key Vault APIs accept and return secret values as strings.
+ - Leave the other values at their defaults. Select **Create**.
+
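As an illustration of the naming rule above, here is a minimal sketch (a hypothetical helper, not part of any Azure SDK) that checks a candidate secret name against it:

```python
import re

# Hypothetical validator for the rule quoted above: a 1-127 character string,
# starting with a letter and containing only 0-9, a-z, A-Z, and hyphens.
SECRET_NAME_RE = re.compile(r"^[A-Za-z][0-9A-Za-z-]{0,126}$")

def is_valid_secret_name(name: str) -> bool:
    """Return True if `name` satisfies the Key Vault secret naming rule."""
    return bool(SECRET_NAME_RE.fullmatch(name))
```

For example, `my-db-conn-string` passes, while a name that starts with a digit or exceeds 127 characters is rejected.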
+Once you receive the message that the secret has been successfully created, you can select it in the list.
+
+For more information, see [About Azure Key Vault secrets](../../key-vault/secrets/secrets-best-practices.md).
+
+## Configure access policies
+In the Key Vault settings, configure the appropriate access policies to grant access to the users or applications that need to retrieve the MySQL connection string from the Key Vault. Ensure that the necessary permissions are granted for "Get" operations on secrets.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to the Key Vault resource.
+1. Select **Access policies**, then select **Create**.
+1. Select the permissions you want under **Key permissions**, **Secret permissions**, and **Certificate permissions**.
+1. Under the **Principal** selection pane, enter the name of the user, app or service principal in the search field and select the appropriate result. If you're using a managed identity for the app, search for and select the name of the app itself.
+1. Review the access policy changes and select **Create** to save the access policy.
+1. Back on the **Access policies** page, verify that your access policy is listed.
+
+## Retrieve the MySQL connection string
+In your application or script, use the Azure Key Vault SDK or client libraries to authenticate and retrieve the MySQL connection string from the Key Vault. You need to provide the appropriate authentication credentials and access permissions to access the Key Vault. Once you have retrieved the MySQL connection string from Azure Key Vault, you can use it in your application to establish a connection to the MySQL database. Pass the retrieved connection string as a parameter to your database connection code.
+
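Once retrieved, the connection string typically needs to be split into the parameters your database client expects. The following sketch (a hypothetical helper, assuming the ADO.NET-style `key=value;` format shown in the Azure portal) shows one way to do that:

```python
def parse_conn_string(conn_string: str) -> dict:
    """Split a 'key=value;key=value' connection string into a dict."""
    params = {}
    for segment in conn_string.split(";"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            # Normalize keys so lookups don't depend on casing or spacing.
            params[key.strip().lower()] = value.strip()
    return params

# Example with placeholder values, not a real server name:
params = parse_conn_string(
    "Server=my-server.mysql.database.azure.com;User Id=myadmin;Database=mydb"
)
```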
+### Code samples to retrieve connection string
+Here are a few code samples that retrieve the connection string from the Key Vault secret.
+
+### [.NET](#tab/dotnet)
+In this code, we use the [Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net). We define the URI of our Key Vault and the name of the secret (connection string) we want to retrieve. We then create a **DefaultAzureCredential** object, which represents the authentication information for our application, and a **SecretClient** to which we pass the Key Vault URI and the credential. Finally, we call the _GetSecretAsync_ method on the client, passing the name of the secret we want to retrieve.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+namespace KeyVaultDemo
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ var kvUri = "https://my-key-vault.vault.azure.net/";
+ var secretName = "my-db-conn-string";
+
+ var credential = new DefaultAzureCredential();
+ var client = new SecretClient(new Uri(kvUri), credential);
+
+ // Response<KeyVaultSecret> converts implicitly to KeyVaultSecret; Value is the secret string.
+ KeyVaultSecret secret = await client.GetSecretAsync(secretName);
+ var connString = secret.Value;
+
+ Console.WriteLine($"Connection string retrieved: {connString}");
+ }
+ }
+}
+```
+
+### [Java](#tab/java)
+In this Java code, we use the [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-java) to interact with Azure Key Vault. We first define the Key Vault URL and the name of the secret (connection string) we want to retrieve. Then, we create a SecretClient object using the SecretClientBuilder class. We set the Key Vault URL and provide the DefaultAzureCredential to authenticate with Azure AD. The DefaultAzureCredential automatically authenticates using the available credentials, such as environment variables, managed identities, or Visual Studio Code authentication.
+
+Next, we use the _getSecret_ method on the **SecretClient** to retrieve the secret. The method returns a **KeyVaultSecret** object, from which we can obtain the secret value using the _getValue_ method. Finally, we print the retrieved connection string to the console. Make sure to replace the _keyVaultUrl_ and _secretName_ variables with your own Key Vault URL and secret name.
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
+
+public class KeyVaultDemo {
+
+ public static void main(String[] args) {
+ String keyVaultUrl = "https://my-key-vault.vault.azure.net/";
+ String secretName = "my-db-conn-string";
+
+ SecretClient secretClient = new SecretClientBuilder()
+ .vaultUrl(keyVaultUrl)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
+
+ KeyVaultSecret secret = secretClient.getSecret(secretName);
+ String connString = secret.getValue();
+
+ System.out.println("Connection string retrieved: " + connString);
+ }
+}
+```
+
+### [PHP](#tab/php)
+In this PHP code, we first require the necessary autoload file and import the required classes from the [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php). We define the _$keyVaultUrl_ variable with the URL of your Azure Key Vault and _$secretName_ variable with the name of the secret (connection string) you want to retrieve. Next, we create a **DefaultAzureCredential** object to authenticate with Azure AD, which automatically picks up the available credentials from your environment.
+
+We then create a **SecretClient** object, passing the Key Vault URL and the credential object to authenticate with the Key Vault. The _getSecret_ method on the **SecretClient** can retrieve the secret by passing the _$secretName_. The method returns a **KeyVaultSecret** object, from which we can obtain the secret value using the _getValue_ method. Finally, we print the retrieved connection string to the console. Make sure to have the necessary Azure SDK packages installed and the autoload file included properly in your PHP project.
+
+```php
+require_once 'vendor/autoload.php';
+
+use Azure\Identity\DefaultAzureCredential;
+use Azure\Security\KeyVault\Secrets\SecretClient;
+
+$keyVaultUrl = 'https://my-key-vault.vault.azure.net/';
+$secretName = 'my-db-conn-string';
+
+$credential = new DefaultAzureCredential();
+$client = new SecretClient($keyVaultUrl, $credential);
+
+$secret = $client->getSecret($secretName);
+$connString = $secret->getValue();
+
+echo 'Connection string retrieved: ' . $connString;
+```
+
+### [Python](#tab/python)
+In this Python code, we first import the necessary modules from the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python). We define the _key_vault_url_ variable with the URL of your Azure Key Vault and _secret_name_ variable with the name of the secret (connection string) you want to retrieve. Next, we create a **DefaultAzureCredential** object to authenticate with Azure AD. The **DefaultAzureCredential** automatically authenticates using the available credentials, such as environment variables, managed identities, or Visual Studio Code authentication.
+
+Then, we create a **SecretClient** object, passing the Key Vault URL and the credential object to authenticate with the Key Vault. The _get_secret_ method on the **SecretClient** can retrieve the secret by passing the secret_name. The method returns a **KeyVaultSecret** object, from which we can obtain the secret value using the value property. Finally, we print the retrieved connection string to the console. Make sure to replace the _key_vault_url_ and _secret_name_ variables with your own Key Vault URL and secret name.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+key_vault_url = "https://my-key-vault.vault.azure.net/"
+secret_name = "my-db-conn-string"
+
+credential = DefaultAzureCredential()
+secret_client = SecretClient(vault_url=key_vault_url, credential=credential)
+
+secret = secret_client.get_secret(secret_name)
+conn_string = secret.value
+
+print("Connection string retrieved:", conn_string)
+```
+---
+
+## Next steps
+[Azure Key Vault client libraries](../../key-vault/general/client-libraries.md)
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
|||--|--|
| Australia Central | 20.36.105.32 | 20.36.105.0 | |
| Australia Central2 | 20.36.113.0, 20.36.113.32 | | |
-| Australia East | 13.75.149.87, 40.79.161.1 | | |
-| Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | |
+| Australia East | 40.79.161.1, 13.70.112.32 | 13.75.149.87 | |
+| Australia South East | 13.77.49.32, 13.77.48.10, 13.77.49.33 | 13.73.109.251 | |
| Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 |
| Canada Central | 13.71.168.32 | | 40.85.224.249, 52.228.35.221 |
-| Canada East | 40.86.226.166, 40.69.105.32 | 52.242.30.154 | |
+| Canada East | 40.69.105.32 | 52.242.30.154, 40.86.226.166 | |
| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
| China East | 52.130.112.139 | 139.219.130.35 | |
| China East 2 | 40.73.82.1, 52.130.120.89 |
The following table lists the gateway IP addresses of the Azure Database for MyS
| UAE North | 65.52.248.0 | | |
| UK South | 51.140.144.32, 51.105.64.0 | 51.140.184.11 | |
| UK West | 51.140.208.98 | 51.141.8.11 | |
-| West Central US | 13.78.145.25, 52.161.100.158 | | |
+| West Central US | 13.71.193.34 | 13.78.145.25, 52.161.100.158 | |
| West Europe | 13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 |
| West US | 13.86.216.212, 13.86.217.212 | 104.42.238.205 | 23.99.34.75 |
| West US2 | 13.66.136.195, 13.66.136.192, 13.66.226.202 | | |
open-datasets Overview What Are Open Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/overview-what-are-open-datasets.md
Following are examples of datasets available.
|Dataset | Notebooks | Description |
|-|||
-|[Public Holidays](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/) | [Azure Notebooks](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/?tab=data-access#AzureNotebooks) <br> [Azure Databricks](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/?tab=data-access#AzureDatabricks) | Worldwide public holiday data, covering 41 countries or regions from 1970 to 2099. Includes country and whether most people have paid time off. |
+|[Public Holidays](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/) | [Azure Notebooks](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/?tab=data-access#AzureNotebooks) <br> [Azure Databricks](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/?tab=data-access#AzureDatabricks) | Worldwide public holiday data, covering 41 countries or regions from 1970 to 2099. Includes country/region and whether most people have paid time off. |
## Access to datasets With an Azure account, you can access open datasets using code or through the Azure service interface. The data is colocated with Azure cloud compute resources for use in your machine learning solution.
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
Title: Intelligent tuning - Azure Database for PostgreSQL - Flexible Server
description: This article describes the intelligent tuning feature in Azure Database for PostgreSQL - Flexible Server.
Previously updated : 11/30/2021
Last updated : 06/02/2023

# Perform intelligent tuning in Azure Database for PostgreSQL - Flexible Server

[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-**Applies to:** Azure Database for PostgreSQL - Flexible Server versions 11 and later.
+The intelligent tuning feature of Azure Database for PostgreSQL - Flexible Server is designed to enhance overall
+performance automatically and help prevent possible issues. It continuously monitors the database instance's overall
+status and performance and automatically optimizes the workload's performance.
-The intelligent tuning feature in Azure Database for PostgreSQL - Flexible Server provides a way to automatically improve your database's performance. Intelligent tuning automatically adjusts your `checkpoint_completion_target`, `min_wal_size`, and `bgwriter_delay` parameters based on usage patterns and values. It queries statistics for your database every 30 minutes and makes constant adjustments to optimize performance without any interaction.
+Azure Database for PostgreSQL - Flexible Server is equipped with an inherent intelligence mechanism that can
+dynamically adapt the database to your workload, thereby automatically enhancing performance. This feature comprises two
+automatic tuning functionalities:
-Intelligent tuning is an opt-in feature, so it isn't active by default on a server. It's available for singular databases and isn't global. Enabling it on one database doesn't enable it on all connected databases.
+* **Autovacuum tuning**: This function diligently tracks the bloat ratio and adjusts autovacuum settings accordingly. It
+ factors in both current and predicted resource usage to ensure your workload isn't disrupted.
+* **Writes tuning**: This feature persistently monitors the volume and patterns of write operations, modifying
+ parameters that affect write performance. These parameters
+ include `bgwriter_delay`, `checkpoint_completion_target`, `max_wal_size`, and `min_wal_size`. The primary aim of these
+ adjustments is to enhance both system performance and reliability, thereby proactively averting potential
+ complications.
-## Enable intelligent tuning by using the Azure portal
+Learn how to enable intelligent tuning by using the [Azure portal](how-to-enable-intelligent-performance-portal.md) or the [Azure CLI](how-to-enable-intelligent-performance-cli.md).
-1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
-2. In the **Settings** section of the menu, select **Server Parameters**.
-3. Search for the intelligent tuning parameter.
-4. Set the value to **True**, and the select **Save**.
+## Why intelligent tuning?
-Allow up to 35 minutes for the first batch of data to persist in the *azure_sys* database.
+The autovacuum process is a critical part of maintaining the health and performance of a PostgreSQL database. It helps
+to reclaim storage occupied by "dead" rows, freeing up space and ensuring the database continues to run smoothly.
+Equally important is the tuning of write operations within the database, a task that typically falls to database
+administrators (DBAs).
-## Information about intelligent tuning
+However, constantly monitoring a database and fine-tuning write operations can be challenging and time-consuming. This
+becomes increasingly complex when dealing with multiple databases, and might even become an impossible task when
+managing a large number of them.
-Intelligent tuning operates around three main parameters for the given time: `checkpoint_completion_target`, `max_wal_size`, and `bgwriter_delay`.
+This is where intelligent tuning steps in. Rather than manually overseeing and tuning your database, the intelligent
+tuning feature can effectively shoulder some of the load. It helps in the automatic monitoring and tuning of the
+database, allowing you to focus on other important tasks.
-These three parameters mostly affect:
+Intelligent tuning provides an autovacuum tuning feature that vigilantly monitors the bloat ratio, adjusting settings as needed to ensure optimal resource utilization. It proactively manages the "cleaning" process of the database, mitigating performance issues caused by outdated data.
-* The duration of checkpoints.
-* The frequency of checkpoints.
-* The duration of synchronizations.
+In addition, the writes tuning aspect of intelligent tuning observes the quantity and transactional patterns of write operations. It intelligently adjusts parameters such as `bgwriter_delay`, `checkpoint_completion_target`, `max_wal_size`, and `min_wal_size`. By doing so, it effectively enhances system performance and reliability, ensuring smooth and efficient operation even under high write loads.
+
+In summary, intelligent tuning provides an efficient solution for database monitoring and tuning, taking the hard and
+tedious tasks off your plate. By using this automatic tuning feature, you can rely on the Azure Database for
+PostgreSQL - Flexible Server to maintain the optimal performance of your databases, saving you valuable time and
+resources.
+
+### How does intelligent tuning work?
+
+Intelligent tuning is an ongoing monitoring and analysis process that not only learns about the characteristics of your
+workload but also tracks your current load and resource usage such as CPU or IOPS. By doing so, it makes sure not to
+disturb the normal operations of your application workload.
+
+The process allows the database to dynamically adjust to your workload by discerning the current bloat ratio, write
+performance, and checkpoint efficiency on your instance. Armed with these insights, intelligent tuning deploys tuning
+actions designed to not only enhance your workload's performance but also to circumvent potential pitfalls.
+
+## Autovacuum tuning
+
+Intelligent tuning adjusts five significant parameters related to
+autovacuum: `autovacuum_vacuum_scale_factor`, `autovacuum_cost_limit`, `autovacuum_naptime`, `autovacuum_vacuum_threshold`,
+and `autovacuum_vacuum_cost_delay`. These parameters regulate components such as the fraction of the table that sets off
+a VACUUM process, the cost-based vacuum delay limit, the pause interval between autovacuum runs, the minimum count of
+updated or dead tuples needed to start a VACUUM, and the pause duration between cleanup rounds.
+
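To make the scale-factor interplay concrete, here is a small sketch of the trigger formula the autovacuum daemon uses, per the PostgreSQL documentation (the default values shown are stock PostgreSQL defaults, which intelligent tuning may move):

```python
def autovacuum_trigger(reltuples: int,
                       autovacuum_vacuum_threshold: int = 50,
                       autovacuum_vacuum_scale_factor: float = 0.2) -> float:
    """Dead-tuple count at which autovacuum starts a VACUUM on a table:
    threshold + scale_factor * reltuples (PostgreSQL's documented formula)."""
    return autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples

# With stock defaults, a 1,000,000-row table is only vacuumed once ~200,050
# dead tuples accumulate; lowering the scale factor triggers VACUUM far sooner.
default_trigger = autovacuum_trigger(1_000_000)
tuned_trigger = autovacuum_trigger(1_000_000, autovacuum_vacuum_scale_factor=0.02)
```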
+> [!IMPORTANT]
+> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have
+> four or more vCores; the Burstable server compute tier is not supported.
+
+> [!IMPORTANT]
+> It's important to keep in mind that intelligent tuning modifies autovacuum-related parameters at the server level, not
+> at individual table levels. Also, if autovacuum is turned off, intelligent tuning cannot operate correctly. For
+> intelligent tuning to optimize the process, the autovacuum feature must be enabled.
+
+While the autovacuum daemon triggers two operations, VACUUM and ANALYZE, intelligent tuning only fine-tunes the VACUUM
+process. The ANALYZE process, which gathers statistics on table contents to help the PostgreSQL query planner choose the
+most suitable query execution plan, is currently not adjusted by this feature.
+
+One key feature of intelligent tuning is that it includes safeguards to measure resource utilization like CPU and IOPS.
+This means that it will not ramp up autovacuum activity when your instance is under heavy load. This way, intelligent
+tuning ensures a balance between effective cleanup operations and the overall performance of your system.
+
+When optimizing autovacuum, intelligent tuning considers the server's average bloat, using statistics about live and
+dead tuples. To lessen bloat, intelligent tuning might reduce parameters like the scale factor or naptime, triggering
+the VACUUM process sooner and, if necessary, decreasing the delay between rounds.
+
+On the other hand, if the bloat is minimal and the autovacuum process is too aggressive, then parameters such as delay,
+scale factor, and naptime may be increased. This balance ensures minimal bloat and the efficient use of the resources by
+the autovacuum process.
++
+## Writes tuning
+
+Intelligent tuning adjusts four parameters related to writes tuning: `bgwriter_delay`, `checkpoint_completion_target`, `max_wal_size`, and `min_wal_size`. The behavior and benefits of adjusting some of these are described below.
+
+The `bgwriter_delay` parameter determines the frequency at which the background writer process is awakened to clean "dirty" buffers (those buffers that are new or modified). The background writer process is one of three processes in PostgreSQL
+that handle write operations, the other two being the checkpointer process and backends (standard client processes, such
+as application connections). The background writer process's primary role is to alleviate the load from the main
+checkpointer process and decrease the strain of backend writes. By adjusting the `bgwriter_delay` parameter, which governs the frequency of background writer rounds, we can also optimize the performance of DML queries.
+
+The `checkpoint_completion_target` parameter is part of the second write mechanism supported by PostgreSQL, specifically
+the checkpointer process. Checkpoints occur at constant intervals defined by `checkpoint_timeout` (unless forced by
+exceeding the configured space). To avoid overloading the I/O system with a surge of page writes, writing dirty buffers
+during a checkpoint is spread out over a period of time. This duration is controlled by
+the `checkpoint_completion_target`, specified as a fraction of the checkpoint interval, which is set
+using `checkpoint_timeout`.
+While the default value of `checkpoint_completion_target` is 0.9 (since PostgreSQL 14), which generally works best as it
+spreads the I/O load over the maximum time period, there might be rare instances where, due to unexpected fluctuations
+in the number of WAL segments needed, checkpoints may not complete in time. Hence, due to its potential impact on
+performance, `checkpoint_completion_target` has been chosen as a target metric for intelligent tuning.
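As a rough illustration of why this parameter matters, the window over which a timed checkpoint's writes are paced is simply the product of the two settings (the example values are the stock defaults mentioned above):

```python
def checkpoint_write_window(checkpoint_timeout_s: float,
                            checkpoint_completion_target: float) -> float:
    """Seconds over which a timed checkpoint's dirty-buffer writes are spread."""
    return checkpoint_timeout_s * checkpoint_completion_target

# Defaults (PostgreSQL 14+): checkpoint_timeout = 300 s, target = 0.9,
# so writes are paced over 270 s of each 300 s checkpoint interval.
window = checkpoint_write_window(300, 0.9)
```

A lower target concentrates the same I/O into a shorter burst, which is why tuning it can smooth out write spikes.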
-Intelligent tuning operates in both directions. It tries to lower durations during high workloads and increase durations during idle segments. In this way, you can get personalized results during difficult time periods without manual updates.
## Limitations and known issues

* Intelligent tuning makes optimizations only in specific ranges. It's possible that the feature won't make any changes.
-* Deleted databases in the query can cause slight delays in the feature's execution of improvements.
-* At this time, the feature makes optimizations only in the storage sections.
+* ANALYZE settings are not adjusted by intelligent tuning.
+
+## Next steps
+
+* [Configure intelligent performance for Azure Database for PostgreSQL - Flexible Server using Azure portal](how-to-enable-intelligent-performance-portal.md)
+* [Configure intelligent performance for Azure Database for PostgreSQL - Flexible Server using Azure CLI](how-to-enable-intelligent-performance-cli.md)
+* [Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server](concepts-troubleshooting-guides.md)
+* [Autovacuum Tuning in Azure Database for PostgreSQL - Flexible Server](how-to-autovacuum-tuning.md)
+* [Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server](how-to-high-io-utilization.md)
+* [Best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server](how-to-bulk-load-data.md)
+* [Troubleshoot high CPU utilization in Azure Database for PostgreSQL - Flexible Server](how-to-high-cpu-utilization.md)
+* [Query Performance Insight for Azure Database for PostgreSQL - Flexible Server](concepts-query-performance-insight.md)
postgresql How To Enable Intelligent Performance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-enable-intelligent-performance-cli.md
+
+ Title: Configure intelligent performance - Azure Database for PostgreSQL - Flexible Server
+description: This article describes how to configure intelligent performance in Azure Database for PostgreSQL - Flexible Server using the Azure CLI.
++++
+ms.devlang: azurecli
+ Last updated : 06/02/2023+++
+# Configure intelligent performance for Azure Database for PostgreSQL - Flexible Server using Azure CLI
++
+You can verify and update the intelligent performance configuration for an Azure Database for PostgreSQL flexible server by using the Azure CLI.
+
+To learn more about intelligent tuning, see the [overview](concepts-intelligent-tuning.md).
+
+## Prerequisites
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+- Sign in to your Azure account by using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+ ```azurecli
+ az login
+ ```
+
+- If you have multiple subscriptions, choose the subscription in which you want to create the server by using the `az account set` command.
+
+ ```azurecli-interactive
+ az account set --subscription <subscription id>
+ ```
+
+- Create a PostgreSQL flexible server if you haven't already created one by using the `az postgres flexible-server create` command.
+
+ ```azurecli-interactive
+ az postgres flexible-server create --resource-group myresourcegroup --name myservername
+ ```
+
+## Verify current settings
+
+Use the [az postgres flexible-server parameter show](/cli/azure/postgres/flexible-server/parameter#az-postgres-flexible-server-parameter-show) command to confirm the current settings of the intelligent performance feature.
+
+You can verify whether this feature is activated for the server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup** by using the following command.
+
+```azurecli-interactive
+az postgres flexible-server parameter show --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning --query value
+```
+
+You can also inspect the current setting of the **intelligent_tuning.metric_targets** server parameter by using:
+
+```azurecli-interactive
+az postgres flexible-server parameter show --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --query value
+```
+
+## Enable intelligent tuning
+
+To enable or disable intelligent tuning, and to choose among the tuning targets `none`, `Storage-checkpoint_completion_target`, `Storage-min_wal_size`, `Storage-max_wal_size`, `Storage-bgwriter_delay`, `tuning-autovacuum`, and `all`, use the [az postgres flexible-server parameter set](/cli/azure/postgres/flexible-server/parameter#az-postgres-flexible-server-parameter-set) command.
+
+> [!IMPORTANT]
+> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier isn't supported.
+
+First, activate the intelligent tuning feature with the following command:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning --value ON
+```
+
+Next, select the tuning targets that you want to activate.
+To activate all tuning targets, use:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --value all
+```
+
+To enable autovacuum tuning only:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --value tuning-autovacuum
+```
+
+To activate two tuning targets:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --value tuning-autovacuum,Storage-bgwriter_delay
+```
++
+To reset a parameter's value to its default, omit the optional `--value` parameter. The service then applies the default value. In the preceding example, the command would look like the following and would set `intelligent_tuning.metric_targets` to `none`:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets
+```
+
+> [!NOTE]
+> Both `intelligent_tuning` and `intelligent_tuning.metric_targets` server parameters are dynamic, meaning no server restart is required when their values are changed.
+
+### Considerations for selecting `intelligent_tuning.metric_targets` values
+
+When choosing values for the `intelligent_tuning.metric_targets` server parameter, take the following considerations into account:
+
+* The `NONE` value takes precedence over all other values. If `NONE` is chosen alongside any combination of other values, the parameter is treated as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, implying that no tuning occurs.
+
+* The `ALL` value takes precedence over all other values, with the exception of `NONE` as detailed above. If `ALL` is chosen in any combination that doesn't include `NONE`, all the listed parameters undergo tuning.
+> [!NOTE]
+> The `ALL` value encompasses all existing metric targets. Moreover, this value will also automatically apply to any new metric targets that might be added in the future. This allows for comprehensive and future-proof tuning of your PostgreSQL server.
+
+* If you wish to include an additional tuning target, specify both the existing and the new tuning targets. For example, if `bgwriter_delay` is already enabled and you want to add autovacuum tuning, your command would look like this:
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --value tuning-autovacuum,Storage-bgwriter_delay
+```
+Specifying only a new value overwrites the current settings. When you add a new tuning target, always include the existing tuning targets in your command.
++
+## Next steps
+
+- [Perform intelligent tuning in Azure Database for PostgreSQL - Flexible Server
+](concepts-intelligent-tuning.md)
postgresql How To Enable Intelligent Performance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-enable-intelligent-performance-portal.md
+
+ Title: Configure intelligent performance - Azure Database for PostgreSQL - Flexible Server - Portal
+description: This article describes how to configure intelligent performance in Azure Database for PostgreSQL Flexible Server through the Azure portal.
+++++ Last updated : 06/05/2023++
+# Configure intelligent performance for Azure Database for PostgreSQL - Flexible Server using Azure portal
++
+This article provides a step-by-step procedure to configure intelligent performance in Azure Database for PostgreSQL - Flexible Server by using the Azure portal.
+
+To learn more about intelligent tuning, see the [overview](concepts-intelligent-tuning.md).
+
+> [!IMPORTANT]
+> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier isn't supported.
+
+## Steps to enable intelligent tuning on your Flexible Server
+
+1. Visit the [Azure portal](https://portal.azure.com/) and select the flexible server on which you want to enable intelligent tuning.
+
+2. In the left pane, select **Server Parameters** and then search for **intelligent tuning**.
+
+ :::image type="content" source="media/how-to-intelligent-tuning-portal/enable-intelligent-tuning.png" alt-text="Screenshot of Server Parameter blade with search for intelligent tuning.":::
+
+3. You'll see two parameters: `intelligent_tuning` and `intelligent_tuning.metric_targets`. To activate intelligent tuning, set `intelligent_tuning` to `ON`. You can select one, multiple, or all available tuning targets in `intelligent_tuning.metric_targets`. Select **Save** to apply these changes.
++
+> [!NOTE]
+> Both `intelligent_tuning` and `intelligent_tuning.metric_targets` server parameters are dynamic, meaning no server restart is required when their values are changed.
+
+### Considerations for selecting `intelligent_tuning.metric_targets` values
+
+When choosing values for the `intelligent_tuning.metric_targets` server parameter, take the following considerations into account:
+
+* The `NONE` value takes precedence over all other values. If `NONE` is chosen alongside any combination of other values, the parameter is treated as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, implying that no tuning occurs.
+
+* The `ALL` value takes precedence over all other values, with the exception of `NONE` as detailed above. If `ALL` is chosen in any combination that doesn't include `NONE`, all the listed parameters undergo tuning.
+
+> [!NOTE]
+> The `ALL` value encompasses all existing metric targets. Moreover, this value will also automatically apply to any new metric targets that might be added in the future. This allows for comprehensive and future-proof tuning of your PostgreSQL server.
+
+## Next steps
+
+- [Perform intelligent tuning in Azure Database for PostgreSQL - Flexible Server
+](concepts-intelligent-tuning.md)
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
Here are few tips to keep in mind when you build your database schema and your q
### Use BIGINT or UUID for Primary Keys
-When building custom application or some frameworks they maybe using `INT` instead of `BIGINT` for primary keys. When you use ```INT```, you run the risk of where the value in your database can exceed storage capacity of ```INT``` data type. Making this change to an existing production application can be time consuming with cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys.This identifier uses an auto-generated 128-bit string, for example ```a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11```. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html).
+When building a custom application, some frameworks might use `INT` instead of `BIGINT` for primary keys. When you use `INT`, you run the risk that a value in your database can exceed the storage capacity of the `INT` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use a [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11`. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/current/datatype.html).
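+
+As a minimal sketch (hypothetical table names; `gen_random_uuid()` is built in from PostgreSQL 13, while earlier versions need the `pgcrypto` extension), the two approaches look like this:
+
+```sql
+-- BIGINT identity primary key
+CREATE TABLE orders_bigint (
+    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+    created_at timestamptz NOT NULL DEFAULT now()
+);
+
+-- UUID primary key
+CREATE TABLE orders_uuid (
+    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+    created_at timestamptz NOT NULL DEFAULT now()
+);
+```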
### Use indexes
-There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could do without an index. But indexes also add overhead to the database server, hence avoid having too many indexes.
+There are many types of [indexes](https://www.postgresql.org/docs/current/indexes.html) in Postgres that can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without one. But indexes also add overhead to the database server, so avoid having too many indexes.
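+
+As a hedged sketch (hypothetical table and column names), a single b-tree index on a frequently filtered column is often enough:
+
+```sql
+-- Speeds up queries such as: SELECT * FROM orders WHERE customer_id = $1;
+CREATE INDEX idx_orders_customer_id ON orders (customer_id);
+```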
### Use autovacuum
Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure
### Use the Query Store
-The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stats_statements.
+The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a method to track query statistics. We recommend this feature as an alternative to using pg_stats_statements.
### Optimize bulk inserts and use transient data

If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. Writes to unlogged tables skip the write-ahead log (WAL), which by default provides the atomicity and durability of the ACID properties (atomicity, consistency, isolation, and durability). Skipping WAL makes inserts much faster, but an unlogged table isn't crash-safe: it's truncated after a crash or unclean shutdown. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md).
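
As a hedged sketch (the `events` table is hypothetical), a typical pattern is to bulk-load into an unlogged staging table and then move the rows into a durable table:

```sql
-- Staging table with the same structure as the durable target table.
CREATE UNLOGGED TABLE staging_events (LIKE events INCLUDING ALL);
-- ... bulk COPY into staging_events, clean the data, then:
INSERT INTO events SELECT * FROM staging_events;
```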
-## Next Steps
-
-[Postgres Guide](http://postgresguide.com/)
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Note that once the SIM group is created, the encryption type cannot be changed.
|Value |Field name in Azure portal | JSON file parameter name |
||||
|The name for the SIM resource. The name must only contain alphanumeric characters, dashes, and underscores. |**SIM name**|`simName`|
- |The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. It's a unique numerical value between 19 and 20 digits in length, beginning with 89. |**ICCID**|`integratedCircuitCardIdentifier`|
+ |The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country/region and issuer. It's a unique numerical value between 19 and 20 digits in length, beginning with 89. |**ICCID**|`integratedCircuitCardIdentifier`|
|The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. |**IMSI**|`internationalMobileSubscriberIdentity`|
|The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. The Ki must be a 32-character string, containing hexadecimal characters only. |**Ki**|`authenticationKey`|
|The derived operator code (OPc). The OPc is derived from the SIM's Ki and the network's operator code (OP), and is used by the packet core to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. |**Opc**|`operatorKeyCode`|
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
You need to agree with the enterprise team which IP subnets and addresses will b
The RAN that you use to broadcast the signal across the enterprise site must comply with local regulations. For example, this could mean: -- The RAN units have completed the process of homologation and received regulatory approval for their use on a certain frequency band in a country.
+- The RAN units have completed the process of homologation and received regulatory approval for their use on a certain frequency band in a country/region.
- You have received permission for the RAN to broadcast using spectrum in a certain location, for example, by grant from a telecom operator, regulatory authority or via a technological solution such as a Spectrum Access System (SAS). - The RAN units in a site have access to high-precision timing sources, such as Precision Time Protocol (PTP) and GPS location services.
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
To begin, collect the values in the following table for each SIM you want to pro
| Value | Field name in Azure portal | JSON file parameter name |
|--|--|--|
| SIM name. The SIM name must only contain alphanumeric characters, dashes, and underscores. | **SIM name** | `simName` |
-| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
+| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country/region and issuer. The ICCID is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
| The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. | **IMSI** | `internationalMobileSubscriberIdentity` |
| The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | **Ki** | `authenticationKey` |
| The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | **Opc** | `operatorKeyCode` |
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
||||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Source dependant](create-sensitivity-label.md) | [Yes](#access-policy) | [Source Dependant](catalog-lineage-user-guide.md)| No |
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Source dependent](create-sensitivity-label.md) | [Yes](#access-policy) | [Source Dependent](catalog-lineage-user-guide.md)| No |
## Prerequisites
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Person Name machine learning model has been trained using global datasets of nam
- ### Person's Address
-Person's address classification is used to detect full address stored in a single column containing the following elements: House number, Street Name, City, State, Country, Zip Code. Person's Address classifier uses machine learning model that is trained on the global addresses data set in English language.
+Person's address classification is used to detect full address stored in a single column containing the following elements: House number, Street Name, City, State, Country/Region, Zip Code. Person's Address classifier uses machine learning model that is trained on the global addresses data set in English language.
#### Supported formats

Currently the address model supports the following formats in the same column:
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
Regions are paired for cross-region replication based on proximity and other fac
| US Government |US Gov Arizona\* |US Gov Texas\* | | US Government |US Gov Virginia\* |US Gov Texas\* |
-(\*) Certain regions are access restricted to support specific customer scenarios, such as in-country disaster recovery. These regions are available only upon request by [creating a new support request](/troubleshoot/azure/general/region-access-request-process#reserved-access-regions).
+(\*) Certain regions are access restricted to support specific customer scenarios, such as in-country/region disaster recovery. These regions are available only upon request by [creating a new support request](/troubleshoot/azure/general/region-access-request-process#reserved-access-regions).
> [!IMPORTANT]
> - West India is paired in one direction only. West India's secondary region is South India, but South India's secondary region is Central India.
sap Acss Backup Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/acss-backup-integration.md
+
+ Title: Configure and view Backup status for your SAP system on Virtual Instance for SAP solutions (preview)
+description: Learn how to configure and view Backup status for your SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
+++ Last updated : 06/06/2023++
+#Customer intent: As an SAP Basis Admin, I want to understand how to configure backup for my SAP system and monitor it to ensure backups are running as expected.
++
+# Configure and monitor Azure Backup status for your SAP system through Virtual Instance for SAP solutions (Preview)
+
+> [!NOTE]
+> Configuration of Backup from Virtual Instance for SAP solutions feature is currently in Preview.
+
+In this how-to guide, you'll learn to configure and monitor Azure Backup for your SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
+
+When you configure Azure Backup from the VIS resource, you can enable Backup for all your **Central service and Application server virtual machines** and your **HANA Database** in one go. For the HANA Database, Azure Center for SAP solutions automates the step of running the [Pre-Registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does).
+
+Once backup is configured, you can monitor the status of your Backup Jobs for both virtual machines and HANA DB from the VIS.
+
+If you have already configured Backup from Azure Backup Center for your SAP VMs and HANA DB, the VIS resource automatically detects this and enables you to monitor the status of Backup jobs.
+
+Before you can use this feature in preview, register for it from the **Backup (preview)** tab on the Virtual Instance for SAP solutions resource in the Azure portal.
+
+## Prerequisites
+- A Virtual Instance for SAP solutions resource representing your SAP system on Azure Center for SAP solutions.
+- An Azure account with **Contributor** role access on the Subscription in which your SAP system exists.
+- Register **Microsoft.Features** Resource Provider on your subscription.
+- Register your subscription for this preview feature in Azure Center for SAP solutions.
+- After you have successfully registered for the Preview feature, re-register Microsoft.Workloads resource provider on the Subscription.
+- To be able to configure Backup from the VIS resource, assign **Backup Contributor** role access to **Azure Workloads Connector Service** first-party app. This step is not required if you have already configured Backup for your VMs and HANA DB using Azure Backup Center. You will be able to monitor Backup of your SAP system from the VIS.
+- For HANA database backup, ensure the [prerequisites](/azure/backup/tutorial-backup-sap-hana-db#prerequisites) required by Azure Backup are in place.
+- For HANA database backup, create a HDB Userstore key that will be used for preparing HANA DB for configuring Backup.
+
+> [!NOTE]
+> If you are configuring backup for HANA database from the Virtual Instance for SAP solutions resource, you can skip running the [Backup pre-registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does). Azure Center for SAP solutions runs this script before configuring HANA backup.
+
+## Register for Backup integration preview feature
+Before you can configure Backup from the VIS resource, or view Backup status on the VIS resource if Backup is already configured, you need to register for the Backup integration feature in Azure Center for SAP solutions. Follow these steps to register for the feature:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a user with **Contributor** role access.
+2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
+3. On the left navigation, select **Virtual Instance for SAP solutions**.
+4. Select the **Backup (preview)** tab on the left navigation.
+5. Select the **Register for Preview** button.
+6. Registration for features can take up to 30 minutes. Once it's complete, you can configure backup or view the status of already configured backup.
+
+## Configure Backup for your SAP system
+You can configure Backup for your Central service and Application server virtual machines and HANA database from the Virtual Instance for SAP solutions resource by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
+3. On the left navigation, select **Virtual Instance for SAP solutions**.
+4. Select the **Backup (preview)** tab on the left navigation.
+5. If you have not registered for the preview feature, complete the registration process by selecting the **Register** button. This step is needed only once per Subscription.
+6. Select the **Configure** button on the Backup (preview) page.
+7. Select the checkboxes **Central service + App server VMs Backup** and **Database Backup**.
+8. For Central service + App server VMs Backup, select an existing Recovery Services vault or Create new.
+ - Select a Backup policy that is to be used for backing up Central service and App server VMs.
+9. For Database Backup, select an existing Recovery Services vault or Create new.
+ - Select a Backup policy that is to be used for backing up HANA database.
+10. Provide a **HANA DB User Store** key name.
+11. If SSL enforcement is enabled for the HANA database, provide the key store and trust store paths, the SSL host name, and the crypto provider details.
+
+> [!NOTE]
+> If you are configuring backup for an HSR enabled HANA database from the Virtual Instance for SAP solutions resource, then the [Backup pre-registration script](/azure/backup/tutorial-backup-sap-hana-db#what-the-pre-registration-script-does) is run and backup configured only for the Primary HANA database node. In case of a failover, you will need to configure Backup on the new primary node.
+
+## Monitor Backup status of your SAP system
+After you configure Backup for the Virtual Machines and HANA Database of your SAP system either from the Virtual Instance for SAP solutions resource or from the Backup Center, you can monitor the status of Backup from the Virtual Instance for SAP solutions resource.
+
+To monitor Backup status:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **ACSS** and select **Azure Center for SAP solutions** from search results.
+3. On the left navigation, select **Virtual Instance for SAP solutions**.
+4. Select the **Backup (preview)** tab on the left navigation.
+5. If you have not registered for the preview feature, complete the registration process by selecting the **Register** button. This step is needed only once per Subscription.
+6. For Central service + App server VMs and HANA Database, view protection status of **Backup instances** and status of **Backup jobs** in the last 24 hours.
+
+> [!NOTE]
+> For a highly available HANA database, if you configured Backup by using the HSR Backup feature from Backup Center, that configuration isn't detected or displayed under the Database Backup section.
+
+## Next steps
+- [Monitor SAP system from the Azure portal](monitor-portal.md)
+- [Get quality checks and insights for a VIS resource](get-quality-checks-insights.md)
+- [Start and Stop SAP systems](start-stop-sap-systems.md)
+- [View Cost Analysis of SAP system](view-cost-analysis.md)
sap Manage With Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-with-azure-rbac.md
# Management of Azure Center for SAP solutions resources with Azure RBAC -- [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) enables granular access management for Azure. You can use Azure RBAC to manage Virtual Instance for SAP solutions resources within Azure Center for SAP solutions. For example, you can separate duties within your team and grant only the amount of access that users need to perform their jobs. *Users* or *user-assigned managed identities* require minimum roles or permissions to use the different capabilities in Azure Center for SAP solutions.
To view VIS resources, a *user* or *user-assigned managed identity* requires the
| Built-in roles for *users* |
| - |
| **Azure Center for SAP solutions reader** |
-| **Reader** |
| Minimum permissions for *users* |
| - |
To view VIS resources, a *user* or *user-assigned managed identity* requires the
| `Microsoft.Workloads/locations/sapVirtualInstanceMetadata/getAvailabilityZoneDetails/action` | | `Microsoft.Insights/Metrics/Read` | | `Microsoft.ResourceHealth/AvailabilityStatuses/read` |
+| `Microsoft.Advisor/configurations/read` |
+| `Microsoft.Advisor/recommendations/read` |
| Built-in roles for *user-assigned managed identities* | | - |
To view Quality Insights, a *user* requires the following role or permissions.
| Built-in roles for *users* |
| - |
-| **Reader** |
+| **Azure Center for SAP solutions reader** |
| Minimum permissions for *users* |
| - |
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
For more information, see the [Azure Information Protection product documentatio
|- [Double Key Encryption (DKE)](/azure/information-protection/plan-implement-tenant-key) | GA | GA | GA |
|**Office files** <sup>[3](#aipnote6)</sup> | | | |
|- [Protection for Microsoft Exchange Online, Microsoft SharePoint Online, and Microsoft OneDrive for Business](/azure/information-protection/requirements-applications) | GA | GA <sup>[4](#aipnote3)</sup> | GA <sup>[4](#aipnote3)</sup> |
-|- [Protection for on-premises Exchange and SharePoint content via the Rights Management connector](/azure/information-protection/deploy-rms-connector) | GA <sup>[5](#aipnote5)</sup> | Not available | Not available |
+|- [Protection for on-premises Exchange and SharePoint content via the Rights Management connector](/azure/information-protection/deploy-rms-connector) | GA <sup>[5](#aipnote5)</sup> | GA <sup>[6](#aipnote6)</sup> | GA <sup>[6](#aipnote6)</sup> |
|- [Office 365 Message Encryption](/microsoft-365/compliance/set-up-new-message-encryption-capabilities) | GA | GA | GA |
|- [Set labels to automatically apply pre-configured S/MIME protection in Outlook](/azure/information-protection/rms-client/clientv2-admin-guide-customizations) | GA | GA | GA |
-|- [Control oversharing of information when using Outlook](/azure/information-protection/rms-client/clientv2-admin-guide-customizations) | GA | GA <sup>[6](#aipnote6)</sup> | GA <sup>[6](#aipnote6)</sup> |
-|**Classification and labeling** <sup>[2](#aipnote2) / [7](#aipnote7)</sup> | | | |
+|- [Control oversharing of information when using Outlook](/azure/information-protection/rms-client/clientv2-admin-guide-customizations) | GA | GA <sup>[7](#aipnote7)</sup> | GA <sup>[7](#aipnote7)</sup> |
+|**Classification and labeling** <sup>[2](#aipnote2) / [8](#aipnote8)</sup> | | | |
|- Custom templates, including departmental templates | GA | GA | GA |
|- Manual, default, and mandatory document classification | GA | GA | GA |
|- Configure conditions for automatic and recommended classification | GA | GA | GA |
For more information, see the [Azure Information Protection product documentatio
<sup><a name="aipnote5"></a>5</sup> Information Rights Management (IRM) is supported only for Microsoft 365 Apps (version 9126.1001 or higher), including Professional Plus (ProPlus) and Click-to-Run (C2R) versions. Office 2010, Office 2013, and other Office 2016 versions are not supported.
-<sup><a name="aipnote6"></a>6</sup> Sharing of protected documents and emails from government clouds to users in the commercial cloud is not currently available. Includes Microsoft 365 Apps users in the commercial cloud, non-Microsoft 365 Apps users in the commercial cloud, and users with an RMS for Individuals license.
+<sup><a name="aipnote6"></a>6</sup> Only on-premises Exchange is supported. Outlook Protection Rules are not supported. On-premises SharePoint is not supported.
+
+<sup><a name="aipnote7"></a>7</sup> Sharing of protected documents and emails from government clouds to users in the commercial cloud is not currently available. Includes Microsoft 365 Apps users in the commercial cloud, non-Microsoft 365 Apps users in the commercial cloud, and users with an RMS for Individuals license.
+
+<sup><a name="aipnote8"></a>8</sup> The number of [Sensitive Information Types](/microsoft-365/compliance/sensitive-information-type-entity-definitions) in your Microsoft Purview compliance portal may vary based on region.
-<sup><a name="aipnote7"></a>7</sup> The number of [Sensitive Information Types](/microsoft-365/compliance/sensitive-information-type-entity-definitions) in your Microsoft Purview compliance portal may vary based on region.
## Microsoft Defender for Cloud
security Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/physical-security.md
Microsoft uses best practice procedures and a wiping solution that is [NIST 800-
Upon a system's end-of-life, Microsoft operational personnel follow rigorous data handling and hardware disposal procedures to assure that hardware containing your data is not made available to untrusted parties. We use a secure erase approach for hard drives that support it. For hard drives that can't be wiped, we use a destruction process that destroys the drive and renders the recovery of information impossible. This destruction process can be to disintegrate, shred, pulverize, or incinerate. We determine the means of disposal according to the asset type. We retain records of the destruction. All Azure services use approved media storage and disposal management services.

## Compliance
-We design and manage the Azure infrastructure to meet a broad set of international and industry-specific compliance standards, such as ISO 27001, HIPAA, FedRAMP, SOC 1, and SOC 2. We also meet country- or region-specific standards, including Australia IRAP, UK G-Cloud, and Singapore MTCS. Rigorous third-party audits, such as those done by the British Standards Institute, verify adherence to the strict security controls these standards mandate.
+We design and manage the Azure infrastructure to meet a broad set of international and industry-specific compliance standards, such as ISO 27001, HIPAA, FedRAMP, SOC 1, and SOC 2. We also meet country-/region-specific standards, including Australia IRAP, UK G-Cloud, and Singapore MTCS. Rigorous third-party audits, such as those done by the British Standards Institute, verify adherence to the strict security controls these standards mandate.
For a full list of compliance standards that Azure adheres to, see the [Compliance offerings](../../compliance/index.yml).
security Protection Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md
Additionally, "encryption by default" using MACsec (an IEEE standard at the data
**Data redundancy**: Microsoft helps ensure that data is protected if there is a cyberattack or physical damage to a datacenter. Customers may opt for:

-- In-country/in-region storage for compliance or latency considerations.
-- Out-of-country/out-of-region storage for security or disaster recovery purposes.
+- In-country/region storage for compliance or latency considerations.
+- Out-of-country/region storage for security or disaster recovery purposes.
Data can be replicated within a selected geographic area for redundancy but cannot be transmitted outside it. Customers have multiple options for replicating data, including the number of copies and the number and location of replication datacenters.
sentinel Anomalies Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/anomalies-reference.md
Configuration details:
### Suspicious geography change in Palo Alto GlobalProtect account logins
-**Description:** A match indicates that a user logged in remotely from a country that is different from the country of the user's last remote login. This rule might also indicate an account compromise, particularly if the rule matches occurred closely in time. This includes the scenario of impossible travel.
+**Description:** A match indicates that a user logged in remotely from a country/region that is different from the country/region of the user's last remote login. This rule might also indicate an account compromise, particularly if the rule matches occurred closely in time. This includes the scenario of impossible travel.
| Attribute | Value |
| -- | -- |
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
The following **CommonSecurityLog** fields are added by Microsoft Sentinel to en
|||
| **IndicatorThreatType** | The [MaliciousIP](#MaliciousIP) threat type, according to the threat intelligence feed. |
| <a name="MaliciousIP"></a>**MaliciousIP** | Lists any IP addresses in the message that correlate with the current threat intelligence feed. |
-| **MaliciousIPCountry** | The [MaliciousIP](#MaliciousIP) country, according to the geographic information at the time of the record ingestion. |
+| **MaliciousIPCountry** | The [MaliciousIP](#MaliciousIP) country/region, according to the geographic information at the time of the record ingestion. |
| **MaliciousIPLatitude** | The [MaliciousIP](#MaliciousIP) latitude, according to the geographic information at the time of the record ingestion. |
| **MaliciousIPLongitude** | The [MaliciousIP](#MaliciousIP) longitude, according to the geographic information at the time of the record ingestion. |
| **ReportReferenceLink** | Link to the threat intelligence report. |
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
If you see that your query would trigger too many or too frequent alerts, you ca
>
> - An **event** is a description of a single occurrence of an action. For example, a single entry in a log file could count as an event. In this context an event refers to a single result returned by a query in an analytics rule.
>
- > - An **alert** is a collection of events that, taken together, are significant from a security standpoint. An alert could contain a single event if the event had significant security implications - an administrative login from a foreign country outside of office hours, for example.
+ > - An **alert** is a collection of events that, taken together, are significant from a security standpoint. An alert could contain a single event if the event had significant security implications - an administrative login from a foreign country/region outside of office hours, for example.
>
> - By the way, what are **incidents**? Microsoft Sentinel's internal logic creates **incidents** from **alerts** or groups of alerts. The incidents queue is the focal point of SOC analysts' work - triage, investigation and remediation.
>
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Information about **entity pages** can now be found at [Investigate entities wit
Using [KQL](/azure/data-explorer/kusto/query/), we can query the Behavioral Analytics Table.
-For example – if we want to find all the cases of a user that failed to sign in to an Azure resource, where it was the user's first attempt to connect from a given country, and connections from that country are uncommon even for the user's peers, we can use the following query:
+For example – if we want to find all the cases of a user that failed to sign in to an Azure resource, where it was the user's first attempt to connect from a given country/region, and connections from that country/region are uncommon even for the user's peers, we can use the following query:
```Kusto
BehaviorAnalytics
sentinel Normalization Schema Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-authentication.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **TargetDvcOs**| Optional| String| The OS of the target device. <br><br>Example: `Windows 10`|
| **TargetPortNumber** |Optional |Integer |The port of the target device.|
| **TargetGeoCountry** | Optional | Country | The country associated with the target IP address.<br><br>Example: `USA` |
-| **TargetGeoRegion** | Optional | Region | The region within a country associated with the target IP address.<br><br>Example: `Vermont` |
+| **TargetGeoRegion** | Optional | Region | The region associated with the target IP address.<br><br>Example: `Vermont` |
| **TargetGeoCity** | Optional | City | The city associated with the target IP address.<br><br>Example: `Burlington` |
| **TargetGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the target IP address.<br><br>Example: `44.475833` |
| **TargetGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the target IP address.<br><br>Example: `73.211944` |
sentinel Normalization Schema Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-dns.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcPortNumber** | Optional | Integer | Source port of the DNS query.<br><br>Example: `54312` |
| <a name="ipaddr"></a>**IpAddr** | Alias | | Alias to [SrcIpAddr](#srcipaddr) |
| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
-| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoRegion** | Optional | Region | The region associated with the source IP address.<br><br>Example: `Vermont` |
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dst"></a>**Dst** | Alias | String | A unique identifier of the server that received the DNS request. <br><br>This field may alias the [DstDvcId](#dstdvcid), [DstHostname](#dsthostname), or [DstIpAddr](#dstipaddr) fields. <br><br>Example: `192.168.12.1` |
| <a name="dstipaddr"></a>**DstIpAddr** | Optional | IP Address | The IP address of the server that received the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to `127.0.0.1`.<br><br>Example: `127.0.0.1` |
| **DstGeoCountry** | Optional | Country | The country associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `USA` |
-| **DstGeoRegion** | Optional | Region | The region, or state, within a country associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
+| **DstGeoRegion** | Optional | Region | The region, or state, associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
| **DstGeoCity** | Optional | City | The city associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` |
| **DstGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` |
| **DstGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
|<a name="dnssessionid"></a>**DnsSessionId** | Optional | string | The DNS session identifier as reported by the reporting device. This value is different from [TransactionIdHex](#transactionidhex), the DNS query unique ID as assigned by the DNS client.<br><br>Example: `EB4BFA28-2EAD-4EF7-BC8A-51DF4FDF5B55` |
| **SessionId** | Alias | | Alias to [DnsSessionId](#dnssessionid) |
| **DnsResponseIpCountry** | Optional | Country | The country associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `USA` |
-| **DnsResponseIpRegion** | Optional | Region | The region, or state, within a country associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
+| **DnsResponseIpRegion** | Optional | Region | The region, or state, associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
| **DnsResponseIpCity** | Optional | City | The city associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` |
| **DnsResponseIpLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` |
| **DnsResponseIpLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` |
sentinel Normalization Schema File Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-file-event.md
The following fields represent information about the system initiating the file
| **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
| <a name="srcsubscriptionid"></a>**SrcSubscriptionId** | Optional | String | The cloud platform subscription ID the source device belongs to. **SrcSubscriptionId** maps to a subscription ID on Azure and to an account ID on AWS. |
| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
-| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoRegion** | Optional | Region | The region associated with the source IP address.<br><br>Example: `Vermont` |
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
sentinel Normalization Schema Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-network.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **OuterVlanId** | Optional | Alias | Alias to [DstVlanId](#dstvlanid). <br><br>In many cases, the VLAN can't be determined as a source or a destination but is characterized as inner or outer. This alias signifies that [DstVlanId](#dstvlanid) should be used when the VLAN is characterized as outer. |
| <a name="dstsubscription"></a>**DstSubscriptionId** | Optional | String | The cloud platform subscription ID the destination device belongs to. **DstSubscriptionId** maps to a subscription ID on Azure and to an account ID on AWS. |
| **DstGeoCountry** | Optional | Country | The country associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `USA` |
-| **DstGeoRegion** | Optional | Region | The region, or state, within a country associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
+| **DstGeoRegion** | Optional | Region | The region, or state, associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` |
| **DstGeoCity** | Optional | City | The city associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` |
| **DstGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` |
| **DstGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **InnerVlanId** | Optional | Alias | Alias to [SrcVlanId](#srcvlanid). <br><br>In many cases, the VLAN can't be determined as a source or a destination but is characterized as inner or outer. This alias signifies that [SrcVlanId](#srcvlanid) should be used when the VLAN is characterized as inner. |
| <a name="srcsubscription"></a>**SrcSubscriptionId** | Optional | String | The cloud platform subscription ID the source device belongs to. **SrcSubscriptionId** maps to a subscription ID on Azure and to an account ID on AWS. |
| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
-| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoRegion** | Optional | Region | The region associated with the source IP address.<br><br>Example: `Vermont` |
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
sentinel Normalization Schema User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-user-management.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcDvcIdType** | Optional | Enumerated | The type of [SrcDvcId](#srcdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the preceding list, and store the others in **SrcDvcAzureResourceId** and **SrcDvcMDEid**, respectively.<br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. |
| **SrcDeviceType** | Optional | Enumerated | The type of the source device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
-| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoRegion** | Optional | Region | The region associated with the source IP address.<br><br>Example: `Vermont` |
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
sentinel Normalization Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-v1.md
Below is the schema of the network sessions table, versioned 1.0.0
| **DstDvcMacAddr** | String | 06:10:9f:eb:8f:14 | The destination MAC address of a device that is not directly associated with the network packet. | Destination,<br>Device,<br>MAC |
| **DstDvcDomain** | String | CONTOSO | The Domain of the destination device. | Destination,<br>Device |
| **DstPortNumber** | Integer | 443 | The destination IP port. | Destination,<br>Port |
-| **DstGeoRegion** | Region (String) | Vermont | The region within a country associated with the destination IP address | Destination,<br>Geo |
+| **DstGeoRegion** | Region (String) | Vermont | The region associated with the destination IP address | Destination,<br>Geo |
| **DstResourceId** | Device ID (String) | /subscriptions/3c1bb38c-82e3-4f8d-a115-a7110ba70d05 /resourcegroups/contoso77/providers /microsoft.compute/virtualmachines /victim | The resource ID of the destination device. | Destination |
| **DstNatIpAddr** | IP address | 2::1 | If reported by an intermediary NAT device such as a firewall, the IP address used by the NAT device for communication with the source. | Destination NAT,<br>IP |
| **DstNatPortNumber** | int | 443 | If reported by an intermediary NAT device such as a firewall, the port used by the NAT device for communication with the source. | Destination NAT,<br>Port |
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Title: Roles and permissions in Microsoft Sentinel
description: Learn how Microsoft Sentinel assigns permissions to users using Azure role-based access control, and identify the allowed actions for each role. Previously updated : 07/14/2022 Last updated : 06/06/2023
Use Azure RBAC to create and assign roles within your security operations team t
Users with particular job requirements may need to be assigned other roles or specific permissions in order to accomplish their tasks.

-- **Working with playbooks to automate responses to threats**
+- **Install and manage out-of-the-box content**
- Microsoft Sentinel uses **playbooks** for automated threat response. Playbooks are built on **Azure Logic Apps**, and are a separate Azure resource. For specific members of your security operations team, you might want to assign the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations. You can use the [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) role to assign explicit, limited permission for running playbooks, and the [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor) role to create and edit playbooks.
+ Find packaged solutions for end-to-end products or standalone content from the content hub in Microsoft Sentinel. To install and manage content from the content hub, assign the [**Template Spec Contributor**](../role-based-access-control/built-in-roles.md#template-spec-contributor) role at the resource group level.
+
+- **Automate responses to threats with playbooks**
-- **Giving Microsoft Sentinel permissions to run playbooks**
+ Microsoft Sentinel uses playbooks for automated threat response. Playbooks are built on Azure Logic Apps, and are a separate Azure resource. For specific members of your security operations team, you might want to assign the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations. You can use the [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) role to assign explicit, limited permission for running playbooks, and the [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor) role to create and edit playbooks.
+
+- **Give Microsoft Sentinel permissions to run playbooks**
Microsoft Sentinel uses a special service account to run incident-trigger playbooks manually or to call them from automation rules. The use of this account (as opposed to your user account) increases the security level of the service. For an automation rule to run a playbook, this account must be granted explicit permissions to the resource group where the playbook resides. At that point, any automation rule can run any playbook in that resource group. To grant these permissions to this service account, your account must have **Owner** permissions to the resource groups containing the playbooks.

-- **Connecting data sources to Microsoft Sentinel**
+- **Connect data sources to Microsoft Sentinel**
- For a user to add **data connectors**, you must assign the user write permissions on the Microsoft Sentinel workspace. Note the required extra permissions for each connector, as listed on the relevant connector page.
+ For a user to add data connectors, you must assign the user **Write** permissions on the Microsoft Sentinel workspace. Notice the required extra permissions for each connector, as listed on the relevant connector page.
-- **Guest users assigning incidents**
+- **Allow guest users to assign incidents**
- If a guest user needs to be able to assign incidents, you need to assign the [Directory Reader](../active-directory/roles/permissions-reference.md#directory-readers) to the user, in addition to the Microsoft Sentinel Responder role. Note that the Directory Reader role is *not* an Azure role but an **Azure Active Directory** role, and that regular (non-guest) users have this role assigned by default.
+ If a guest user needs to be able to assign incidents, you need to assign the [**Directory Reader**](../active-directory/roles/permissions-reference.md#directory-readers) role to the user, in addition to the **Microsoft Sentinel Responder** role. Note that the Directory Reader role is *not* an Azure role but an Azure Active Directory role, and that regular (non-guest) users have this role assigned by default.
-- **Creating and deleting workbooks**
+- **Create and delete workbooks**
- To create and delete a Microsoft Sentinel workbook, the user needs either the Microsoft Sentinel Contributor role or a lesser Microsoft Sentinel role, together with the [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) Azure Monitor role. This role isn't necessary for *using* workbooks, only for creating and deleting.
+ To create and delete a Microsoft Sentinel workbook, the user needs either the **Microsoft Sentinel Contributor** role or a lesser Microsoft Sentinel role, together with the [**Workbook Contributor**](../role-based-access-control/built-in-roles.md#workbook-contributor) Azure Monitor role. This role isn't necessary for *using* workbooks, only for creating and deleting.
### Azure and Log Analytics roles you might see assigned
For example, a user assigned the **Microsoft Sentinel Reader** role, but not the
This table summarizes the Microsoft Sentinel roles and their allowed actions in Microsoft Sentinel.
-| Role | View and run playbooks | Create and edit playbooks | Create and edit analytics rules, workbooks, and other Microsoft Sentinel resources | Manage incidents (dismiss, assign, etc.) | View data, incidents, workbooks, and other Microsoft Sentinel resources |
-|||||||
-| Microsoft Sentinel Reader | -- | -- | --[*](#workbooks) | -- | &#10003; |
-| Microsoft Sentinel Responder | -- | -- | --[*](#workbooks) | &#10003; | &#10003; |
-| Microsoft Sentinel Contributor | -- | -- | &#10003; | &#10003; | &#10003; |
-| Microsoft Sentinel Playbook Operator | &#10003; | -- | -- | -- | -- |
-| Logic App Contributor | &#10003; | &#10003; | -- | -- | -- |
-
+| Role | View and run playbooks | Create and edit playbooks | Create and edit analytics rules, workbooks, and other Microsoft Sentinel resources | Manage incidents (dismiss, assign, etc.) | View data, incidents, workbooks, and other Microsoft Sentinel resources | Install and manage content from the content hub|
+|||||||--|
+| Microsoft Sentinel Reader | -- | -- | --[*](#workbooks) | -- | &#10003; | --|
+| Microsoft Sentinel Responder | -- | -- | --[*](#workbooks) | &#10003; | &#10003; | --|
+| Microsoft Sentinel Contributor | -- | -- | &#10003; | &#10003; | &#10003; | --|
+| Microsoft Sentinel Playbook Operator | &#10003; | -- | -- | -- | -- | --|
+| Logic App Contributor | &#10003; | &#10003; | -- | -- | -- |-- |
+| Template Spec Contributor | -- | -- | -- | -- | -- |&#10003; |
<a name=workbooks></a>* Users with these roles can create and delete workbooks with the [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) role. Learn about [Other roles and permissions](#other-roles-and-permissions).
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| | [Microsoft Sentinel Playbook Operator](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. | |**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. | | | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. |
+||[Template Spec Contributor](../role-based-access-control/built-in-roles.md#template-spec-contributor)|Microsoft Sentinel's resource group |Install and manage content from the content hub.|
| **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 05/01/2023 Last updated : 06/08/2023 # What's new in Microsoft Sentinel
The listed features were released in the last three months. For information abou
See these [important announcements](#announcements) about recent changes to features and services.
+> [!TIP]
+> Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+>
+> `https://aka.ms/sentinel/rss`
+ [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## June 2023
service-bus-messaging Advanced Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/advanced-features-overview.md
Title: Azure Service Bus messaging - advanced features description: This article provides a high-level overview of advanced features in Azure Service Bus. Previously updated : 01/24/2022 Last updated : 06/08/2023 # Azure Service Bus - advanced features
All Service Bus queues and topics' subscriptions have associated dead-letter que
Messages in the dead-letter queue are annotated with the reason why they've been placed there. The dead-letter queue has a special endpoint, but otherwise acts like any regular queue. An application or tool can browse a DLQ or dequeue from it. You can also autoforward out of a dead-letter queue. For more information, see [Overview of Service Bus dead-letter queues](service-bus-dead-letter-queues.md). ## Scheduled delivery
-You can submit messages to a queue or a topic for delayed processing, setting a time when the message will become available for consumption. Scheduled messages can also be canceled. For more information, see [Scheduled messages](message-sequencing.md#scheduled-messages).
+You can submit messages to a queue or a topic for delayed processing, setting a time when the message becomes available for consumption. Scheduled messages can also be canceled. For more information, see [Scheduled messages](message-sequencing.md#scheduled-messages).
## Message deferral A queue or subscription client can defer retrieval of a received message until a later time. The message may have been posted out of an expected order and the client wants to wait until it receives another message. Deferred messages remain in the queue or subscription and must be reactivated explicitly using their service-assigned sequence number. For more information, see [Message deferral](message-deferral.md).
Autodelete on idle enables you to specify an idle interval after which a queue o
The duplicate detection feature enables the sender to resend the same message again and for the broker to drop a potential duplicate. For more information, see [Duplicate detection](duplicate-detection.md). ## Support ordering
-The **Support ordering** feature allows you to specify whether messages that are sent to a topic will be forwarded to the subscription in the same order in which they were sent. This feature doesn't support partitioned topics. For more information, see [TopicProperties.SupportOrdering](/dotnet/api/azure.messaging.servicebus.administration.topicproperties.supportordering) in .NET or [TopicProperties.setOrderingSupported](/java/api/com.azure.messaging.servicebus.administration.models.topicproperties.setorderingsupported) in Java.
+The **Support ordering** feature allows you to specify whether messages that are sent to a topic are forwarded to the subscription in the same order in which they were sent. This feature doesn't support partitioned topics. For more information, see [TopicProperties.SupportOrdering](/dotnet/api/azure.messaging.servicebus.administration.topicproperties.supportordering) in .NET or [TopicProperties.setOrderingSupported](/java/api/com.azure.messaging.servicebus.administration.models.topicproperties.setorderingsupported) in Java.
## Geo-disaster recovery When an Azure region experiences downtime, the disaster recovery feature enables message processing to continue operating in a different region or data center. The feature keeps a structural mirror of a namespace available in the secondary region and allows the namespace identity to switch to the secondary namespace. Already posted messages remain in the former primary namespace for recovery once the availability episode subsides. For more information, see [Azure Service Bus Geo-disaster recovery](service-bus-geo-dr.md).
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
Title: Azure Service Bus duplicate message detection | Microsoft Docs
description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped. Previously updated : 05/31/2022 Last updated : 06/08/2023 # Duplicate detection
Duplicate detection takes the doubt out of these situations by enabling the send
> The basic tier of Service Bus doesn't support duplicate detection. The standard and premium tiers support duplicate detection. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/). ## How it works
-Enabling duplicate detection helps keep track of the application-controlled *MessageId* of all messages sent into a queue or topic during a specified time window. If any new message is sent with *MessageId* that was logged during the time window, the message is reported as accepted (the send operation succeeds), but the newly sent message is instantly ignored and dropped. No other parts of the message other than the *MessageId* are considered.
+Enabling duplicate detection helps keep track of the application-controlled `MessageId` of all messages sent into a queue or topic during a specified time window. If any new message is sent with a `MessageId` that was logged during the time window, the message is reported as accepted (the send operation succeeds), but the newly sent message is instantly ignored and dropped. No parts of the message other than the `MessageId` are considered.
-Application control of the identifier is essential, because only that allows the application to tie the *MessageId* to a business process context from which it can be predictably reconstructed when a failure occurs.
+Application control of the identifier is essential, because only that allows the application to tie the `MessageId` to a business process context from which it can be predictably reconstructed when a failure occurs.
-For a business process in which multiple messages are sent in the course of handling some application context, the *MessageId* may be a composite of the application-level context identifier, such as a purchase order number, and the subject of the message, for example, **12345.2017/payment**.
+For a business process in which multiple messages are sent in the course of handling some application context, the `MessageId` may be a composite of the application-level context identifier, such as a purchase order number, and the subject of the message, for example, **12345.2017/payment**.
-The *MessageId* can always be some GUID, but anchoring the identifier to the business process yields predictable repeatability, which is desired for using the duplicate detection feature effectively.
+The `MessageId` can always be some GUID, but anchoring the identifier to the business process yields predictable repeatability, which is desired for using the duplicate detection feature effectively.
> [!IMPORTANT] >- When **partitioning** is **enabled**, `MessageId+PartitionKey` is used to determine uniqueness. When sessions are enabled, partition key and session ID must be the same. >- When **partitioning** is **disabled** (default), only `MessageId` is used to determine uniqueness.
->- For information about SessionId, PartitionKey, and MessageId, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
+>- For information about `SessionId`, `PartitionKey`, and `MessageId`, see [Use of partition keys](service-bus-partitioning.md#use-of-partition-keys).
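The windowed matching described above can be sketched in Python. This is an illustrative model of the broker-side behavior, not the actual service implementation; the class and method names are hypothetical:

```python
import time

class DuplicateDetector:
    """Illustrative sketch of time-windowed duplicate detection (not the actual broker code)."""
    def __init__(self, window_seconds, partitioned=False):
        self.window = window_seconds
        self.partitioned = partitioned
        self.seen = {}  # uniqueness key -> time first accepted

    def _key(self, message_id, partition_key=None):
        # With partitioning enabled, MessageId + PartitionKey determines uniqueness.
        return (message_id, partition_key) if self.partitioned else message_id

    def accept(self, message_id, partition_key=None, now=None):
        """Return True if the message is enqueued, False if dropped as a duplicate."""
        now = time.monotonic() if now is None else now
        # Evict IDs that have aged out of the detection window.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
        key = self._key(message_id, partition_key)
        if key in self.seen:
            return False  # the send still "succeeds" for the caller; the message is silently dropped
        self.seen[key] = now
        return True

detector = DuplicateDetector(window_seconds=600)  # 10-minute default window
assert detector.accept("12345.2017/payment", now=0.0) is True
assert detector.accept("12345.2017/payment", now=5.0) is False    # duplicate within the window
assert detector.accept("12345.2017/payment", now=700.0) is True   # window elapsed, accepted again
```

Note how a business-anchored `MessageId` such as `12345.2017/payment` makes the duplicate drop predictable across retries, which a fresh random GUID per send attempt would not.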
## Duplicate detection window size
-Apart from just enabling duplicate detection, you can also configure the size of the duplicate detection history time window during which message-ids are retained.
-This value defaults to 10 minutes for queues and topics, with a minimum value of 20 seconds to maximum value of 7 days.
+Apart from just enabling duplicate detection, you can also configure the size of the duplicate detection history time window during which message IDs are retained. This value defaults to 10 minutes for queues and topics, with a minimum of 20 seconds and a maximum of 7 days.
-Enabling duplicate detection and the size of the window directly impact the queue (and topic) throughput, since all recorded message-ids must be matched against the newly submitted message identifier.
+Enabling duplicate detection and the size of the window directly impact the queue (and topic) throughput, since all recorded message IDs must be matched against the newly submitted message identifier.
-Keeping the window small means that fewer message-ids must be retained and matched, and throughput is impacted less. For high throughput entities that require duplicate detection, you should keep the window as small as possible.
+Keeping the window small means that fewer message IDs must be retained and matched, and throughput is impacted less. For high throughput entities that require duplicate detection, you should keep the window as small as possible.
## Next steps You can enable duplicate message detection using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable duplicate message detection](enable-duplicate-detection.md).
Try the samples in the language of your choice to explore Azure Service Bus feat
- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
-Find samples for the older .NET and Java client libraries below:
+See samples for the older .NET and Java client libraries here:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Message Browsing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-browsing.md
Title: Azure Service Bus - message browsing description: Browse and peek Service Bus messages enables an Azure Service Bus client to enumerate all messages in a queue or subscription. Previously updated : 05/31/2022 Last updated : 06/08/2023 # Message browsing
Peek works on queues, subscriptions, and their dead-letter queues.
When called repeatedly, the peek operation enumerates all messages in the queue or subscription, in order, from the lowest available sequence number to the highest. It's the order in which messages were enqueued, not the order in which messages might eventually be retrieved.
-You can also pass a SequenceNumber to a peek operation. It will be used to determine where to start peeking from. You can make subsequent calls to the peek operation without specifying the parameter to enumerate further.
+You can also pass a SequenceNumber to a peek operation. It's used to determine where to start peeking from. You can make subsequent calls to the peek operation without specifying the parameter to enumerate further.
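The enumeration behavior can be sketched with an in-memory model. This is illustrative only; the queue representation and function names are hypothetical, not SDK types:

```python
# Hypothetical in-memory queue: (sequence number, payload) pairs.
queue = [(seq, f"payload-{seq}") for seq in (11, 12, 13, 14, 15)]

def peek(queue, from_sequence_number=None, max_messages=2):
    """Return up to max_messages in sequence-number order, starting at
    from_sequence_number (or the lowest available), without removing anything."""
    candidates = sorted(m for m in queue
                        if from_sequence_number is None or m[0] >= from_sequence_number)
    return candidates[:max_messages]

first = peek(queue)  # starts at the lowest available sequence number
assert [m[0] for m in first] == [11, 12]
rest = peek(queue, from_sequence_number=13, max_messages=10)
assert [m[0] for m in rest] == [13, 14, 15]
assert len(queue) == 5  # peek is non-destructive: nothing is locked or removed
```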
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features.
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-deferral.md
Title: Azure Service Bus - message deferral description: This article explains how to defer delivery of Azure Service Bus messages. The message remains in the queue or subscription, but it's set aside. Previously updated : 05/31/2022 Last updated : 06/08/2023 # Message deferral
Try the samples in the language of your choice to explore Azure Service Bus feat
- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - see the **advanced/deferral.js** sample. - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/) - see the **advanced/deferral.ts** sample.
-Find samples for the older .NET and Java client libraries below:
+See samples for the older .NET and Java client libraries here:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Deferral** sample. - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
service-bus-messaging Message Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-expiration.md
Title: Azure Service Bus - message expiration description: This article explains about expiration and time to live (TTL) of Azure Service Bus messages. After such a deadline, the message is no longer delivered. Previously updated : 02/18/2022- Last updated : 06/08/2023 # Azure Service Bus - Message expiration (Time to Live)
If the message is protected from expiration while under lock and if the flag is
The combination of time-to-live and automatic (and transactional) dead-lettering on expiry are a valuable tool for establishing confidence in whether a job given to a handler or a group of handlers under a deadline is retrieved for processing as the deadline is reached.
-For example, consider a web site that needs to reliably execute jobs on a scale-constrained backend, and which occasionally experiences traffic spikes or wants to be insulated against availability episodes of that backend. In the regular case, the server-side handler for the submitted user data pushes the information into a queue and subsequently receives a reply confirming successful handling of the transaction into a reply queue. If there is a traffic spike and the backend handler can't process its backlog items in time, the expired jobs are returned on the dead-letter queue. The interactive user can be notified that the requested operation will take a little longer than usual, and the request can then be put on a different queue for a processing path where the eventual processing result is sent to the user by email.
+For example, consider a web site that needs to reliably execute jobs on a scale-constrained backend, and which occasionally experiences traffic spikes or wants to be insulated against availability episodes of that backend. In the regular case, the server-side handler for the submitted user data pushes the information into a queue and subsequently receives a reply confirming successful handling of the transaction into a reply queue. If there's a traffic spike and the backend handler can't process its backlog items in time, the expired jobs are returned on the dead-letter queue. The interactive user can be notified that the requested operation will take a little longer than usual, and the request can then be put on a different queue for a processing path where the eventual processing result is sent to the user by email.
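The expiry-to-dead-letter flow in that scenario can be sketched as follows. This is an illustrative model, not SDK code; the entry layout and function name are hypothetical, though `TTLExpiredException` matches the dead-letter reason the paragraph's annotation behavior implies:

```python
def drain(queue, dead_letter_queue, now, ttl_seconds):
    """Sketch: deliver unexpired jobs and move expired ones to the dead-letter queue.
    Each queue entry is (enqueued_at, payload)."""
    delivered = []
    for enqueued_at, payload in queue:
        if now - enqueued_at > ttl_seconds:
            # Expired jobs are dead-lettered, annotated with the reason.
            dead_letter_queue.append(("TTLExpiredException", payload))
        else:
            delivered.append(payload)
    queue.clear()
    return delivered

dlq = []
backlog = [(0, "job-1"), (50, "job-2")]
assert drain(backlog, dlq, now=100, ttl_seconds=60) == ["job-2"]
assert dlq == [("TTLExpiredException", "job-1")]  # expired job is available for follow-up handling
```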
## Temporary entities
To learn more about Service Bus messaging, see the following articles:
- [Message transfers, locks, and settlement](message-transfers-locks-settlement.md) - [Dead-letter queues](service-bus-dead-letter-queues.md) - [Message deferral](message-deferral.md)-- [Pre-fetch messages](service-bus-prefetch.md)
+- [Prefetch messages](service-bus-prefetch.md)
- [Autoforward messages](service-bus-auto-forwarding.md) - [Transaction support](service-bus-transactions.md) - [Geo-disaster recovery](service-bus-geo-dr.md)
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md
Title: Azure Service Bus message sequencing and timestamps | Microsoft Docs description: This article explains how to preserve sequencing and ordering (with timestamps) of Azure Service Bus messages. Previously updated : 05/31/2022 Last updated : 06/06/2023 # Message sequencing and timestamps
Sequencing and timestamping are two features that are always enabled on all Serv
For those cases in which absolute order of messages is significant and/or in which a consumer needs a trustworthy unique identifier for messages, the broker stamps messages with a gap-free, increasing sequence number relative to the queue or topic. For partitioned entities, the sequence number is issued relative to the partition.
+## Sequence number
The **SequenceNumber** value is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its internal identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers roll over to zero when the 48/64-bit range is exhausted. The sequence number can be trusted as a unique identifier since it's assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes. The absolute arrival order matters, for example, in business scenarios in which a limited number of offered goods are served on a first-come-first-served basis while supplies last; concert ticket sales are an example.
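Assuming the 16-bit/48-bit layout described above for partitioned entities, the partition identifier can be recovered from a sequence number with simple bit arithmetic. This is an illustrative sketch, not an SDK API:

```python
def split_sequence_number(sequence_number):
    """Split a 64-bit sequence number of a partitioned entity into
    (partition_id, per-partition sequence), per the 16/48-bit layout."""
    partition_id = (sequence_number >> 48) & 0xFFFF      # topmost 16 bits
    local_sequence = sequence_number & ((1 << 48) - 1)   # remaining 48 bits
    return partition_id, local_sequence

# A message in partition 3 with per-partition sequence 42:
seq = (3 << 48) | 42
assert split_sequence_number(seq) == (3, 42)
```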
+## Timestamp
The time-stamping capability acts as a neutral and trustworthy authority that accurately captures the UTC time of arrival of a message, reflected in the **EnqueuedTimeUtc** property. The value is useful if a business scenario depends on deadlines, such as whether a work item was submitted on a certain date before midnight, but the processing is far behind the queue backlog. > [!NOTE]
You can submit messages to a queue or topic for delayed processing; for example,
Scheduled messages don't materialize in the queue until the defined enqueue time. Before that time, scheduled messages can be canceled. Cancellation deletes the message. You can schedule messages using any of our clients in two ways:+ - Use the regular send API, but set the `ScheduledEnqueueTimeUtc` property on the message before sending.-- Use the schedule message API, pass both the normal message and the scheduled time. This will return the scheduled message's **SequenceNumber**, which you can later use to cancel the scheduled message if needed.
+- Use the schedule message API, pass both the normal message and the scheduled time. The API returns the scheduled message's **SequenceNumber**, which you can later use to cancel the scheduled message if needed.
Scheduled messages and their sequence numbers can also be discovered using [message browsing](message-browsing.md).
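The schedule-and-cancel behavior can be modeled in a few lines. This is an illustrative sketch under the semantics described above; the class and method names are hypothetical, not SDK types:

```python
import itertools

class ScheduledQueue:
    """Sketch: scheduled messages don't materialize until their enqueue time
    and can be canceled beforehand via the sequence number returned at scheduling."""
    def __init__(self):
        self._seq = itertools.count(1)
        self._scheduled = {}  # sequence number -> (enqueue_time, payload)
        self.active = []

    def schedule(self, payload, enqueue_time):
        seq = next(self._seq)
        self._scheduled[seq] = (enqueue_time, payload)
        return seq  # keep this to cancel later

    def cancel(self, seq):
        self._scheduled.pop(seq, None)  # cancellation deletes the message

    def tick(self, now):
        # Materialize messages whose enqueue time has arrived.
        due = [s for s, (t, _) in self._scheduled.items() if t <= now]
        for s in due:
            self.active.append(self._scheduled.pop(s)[1])

q = ScheduledQueue()
a = q.schedule("invoice", enqueue_time=10)
b = q.schedule("reminder", enqueue_time=20)
q.tick(now=5)
assert q.active == []           # nothing materialized yet
q.cancel(b)                     # canceled before its enqueue time
q.tick(now=30)
assert q.active == ["invoice"]  # only the non-canceled message appears
```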
service-bus-messaging Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/network-security.md
Title: Network security for Azure Service Bus description: This article describes network security features such as service tags, IP firewall rules, service endpoints, and private endpoints. Previously updated : 09/20/2021 Last updated : 06/08/2023
By default, Service Bus namespaces are accessible from internet as long as the r
This feature is helpful in scenarios in which Azure Service Bus should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route](../expressroute/expressroute-introduction.md), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or addresses of a corporate NAT gateway.
-The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
+The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
For more information, see [How to configure IP firewall for a Service Bus namespace](service-bus-ip-filtering.md)
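The ordered, first-match evaluation described above can be sketched with the standard library's `ipaddress` module. This is an illustrative model of the rule semantics, not the service's implementation; the rule shape and function name are hypothetical:

```python
import ipaddress

def evaluate(rules, client_ip):
    """First-match evaluation of IP filter rules.
    rules: ordered list of (CIDR string, action). Unmatched connections are rejected."""
    ip = ipaddress.ip_address(client_ip)
    for cidr, action in rules:
        if ip in ipaddress.ip_network(cidr):
            return action  # the first matching rule decides accept or reject
    return "Reject"  # no matching allow rule: rejected as unauthorized

rules = [("10.0.0.0/24", "Accept"), ("10.0.0.0/8", "Reject")]
assert evaluate(rules, "10.0.0.7") == "Accept"     # matches the first, more specific rule
assert evaluate(rules, "10.1.2.3") == "Reject"     # falls through to the broader rule
assert evaluate(rules, "203.0.113.5") == "Reject"  # default: rejected as unauthorized
```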
That means your security sensitive cloud solutions not only gain access to Azure
Binding a Service Bus namespace to a virtual network is a two-step process. You first need to create a **Virtual Network service endpoint** on a Virtual Network subnet and enable it for **Microsoft.ServiceBus** as explained in the [service endpoint overview](service-bus-service-endpoints.md). Once you have added the service endpoint, you bind the Service Bus namespace to it with a **virtual network rule**.
-The virtual network rule is an association of the Service Bus namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Service Bus namespace. Service Bus itself never establishes outbound connections, does not need to gain access, and is therefore never granted access to your subnet by enabling this rule.
+The virtual network rule is an association of the Service Bus namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Service Bus namespace. Service Bus itself never establishes outbound connections, doesn't need to gain access, and is therefore never granted access to your subnet by enabling this rule.
For more information, see [How to configure virtual network service endpoints for a Service Bus namespace](service-bus-service-endpoints.md)
service-bus-messaging Service Bus Amqp Protocol Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md
Title: AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide | Microsoft Docs description: Protocol guide to expressions and description of AMQP 1.0 in Azure Service Bus and Event Hubs Previously updated : 05/31/2022 Last updated : 06/08/2023 # AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide
When discussing advanced capabilities of Azure Service Bus, such as message brow
AMQP is a framing and transfer protocol. Framing means that it provides structure for binary data streams that flow in either direction of a network connection. The structure provides delineation for distinct blocks of data, called *frames*, to be exchanged between the connected parties. The transfer capabilities make sure that both communicating parties can establish a shared understanding about when frames shall be transferred, and when transfers shall be considered complete.
-Unlike earlier expired draft versions produced by the AMQP working group that are still in use by a few message brokers, the working group's final, and standardized AMQP 1.0 protocol doesn't prescribe the presence of a message broker or any particular topology for entities inside a message broker.
+Unlike earlier expired draft versions from the AMQP working group that are still in use by a few message brokers, the working group's final, and standardized AMQP 1.0 protocol doesn't prescribe the presence of a message broker or any particular topology for entities inside a message broker.
The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure Event Hubs. An event hub acts like a queue when events are sent to it, but acts more like a serial storage service when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data stream and is then served all events from that offset to the latest available.
A special form of rejection is the *released* state, which indicates that the re
The AMQP 1.0 specification defines a further disposition state called *received*, that specifically helps to handle link recovery. Link recovery allows reconstituting the state of a link and any pending deliveries on top of a new connection and session, when the prior connection and session were lost.
-Service Bus does not support link recovery; if the client loses the connection to Service Bus with an unsettled message transfer pending, that message transfer is lost, and the client must reconnect, reestablish the link, and retry the transfer.
+Service Bus doesn't support link recovery; if the client loses the connection to Service Bus with an unsettled message transfer pending, that message transfer is lost, and the client must reconnect, reestablish the link, and retry the transfer.
As such, Service Bus and Event Hubs support "at least once" transfer where the sender can be assured for the message having been stored and accepted, but don't support "exactly once" transfers at the AMQP level, where the system would attempt to recover the link and continue to negotiate the delivery state to avoid duplication of the message transfer.
The operations are grouped by an identifier `txn-id`.
For transactional interaction, the client acts as a `transaction controller` , which controls the operations that should be grouped together. Service Bus Service acts as a `transactional resource` and performs work as requested by the `transaction controller`.
-The client and service communicate over a `control link` , which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they don't represent the demarcation of transactional work). The actual send/receive is not performed on this link. Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore may occur on any link on the Connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will lead to failure. Messages on control link must not be pre settled.
+The client and service communicate over a `control link`, which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they don't represent the demarcation of transactional work). The actual send/receive isn't performed on this link. Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore may occur on any link on the connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will lead to failure. Messages on the control link must not be presettled.
Every connection has to initiate its own control link to be able to start and end transactions. The service defines a special target that functions as a `coordinator`. The client/controller establishes a control link to this target. Control link is outside the boundary of an entity, that is, same control link can be used to initiate and discharge transactions for multiple entities.
The controller concludes the transactional work by sending a `discharge` message
#### Sending a message in a transaction
-All transactional work is done with the transactional delivery state `transactional-state` that carries the txn-id. In the case of sending messages, the transactional-state is carried by the message's transfer frame.
+All transactional work is done with the transactional delivery state `transactional-state` that carries the txn-id. When sending messages, the transactional-state is carried by the message's transfer frame.
| Client (Controller) | Direction | Service Bus (Coordinator) | | : | :: | : |
Having that pair of links in place, the request/response implementation is strai
The pattern obviously requires that the client container and the client-generated identifier for the reply destination are unique across all clients and, for security reasons, also difficult to predict.
-The message exchanges used for the management protocol and for all other protocols that use the same pattern happen at the application level; they do not define new AMQP protocol-level gestures. That's intentional, so that applications can take immediate advantage of these extensions with compliant AMQP 1.0 stacks.
+The message exchanges used for the management protocol and for all other protocols that use the same pattern happen at the application level; they don't define new AMQP protocol-level gestures. That's intentional, so that applications can take immediate advantage of these extensions with compliant AMQP 1.0 stacks.
-Service Bus does not currently implement any of the core features of the management specification, but the request/response pattern defined by the management specification is foundational for the claims-based-security feature and for nearly all of the advanced capabilities discussed in the following sections:
+Service Bus doesn't currently implement any of the core features of the management specification, but the request/response pattern defined by the management specification is foundational for the claims-based-security feature and for nearly all of the advanced capabilities discussed in the following sections:
### Claims-based authorization
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
Title: Azure Service Bus messages, payloads, and serialization | Microsoft Docs description: This article provides an overview of Azure Service Bus messages, payloads, message routing, and serialization. Previously updated : 05/31/2022 Last updated : 06/08/2023 # Messages, payloads, and serialization
-Microsoft Azure Service Bus handles messages. Messages carry a payload and metadata. The metadata is in the form of key-value pair properties, and describes the payload, and gives handling instructions to Service Bus and applications. Occasionally, that metadata alone is sufficient to carry the information that the sender wants to communicate to receivers, and the payload remains empty.
+Azure Service Bus handles messages. Messages carry a payload and metadata. The metadata is in the form of key-value pairs, and describes the payload, and gives handling instructions to Service Bus and applications. Occasionally, that metadata alone is sufficient to carry the information that the sender wants to communicate to receivers, and the payload remains empty.
The object model of the official Service Bus clients for .NET and Java reflect the abstract Service Bus message structure, which is mapped to and from the wire protocols Service Bus supports.
-A Service Bus message consists of a binary payload section that Service Bus never handles in any form on the service-side, and two sets of properties. The *broker properties* are predefined by the system. These predefined properties either control message-level functionality inside the broker, or they map to common and standardized metadata items. The *user properties* are a collection of key-value pairs that can be defined and set by the application.
+A Service Bus message consists of a binary payload section that Service Bus never handles in any form on the service-side, and two sets of properties. The **broker properties** are predefined by the system. These predefined properties either control message-level functionality inside the broker, or they map to common and standardized metadata items. The **user properties** are a collection of key-value pairs that can be defined and set by the application.
The predefined broker properties are listed in the following table. The names are used with all official client APIs and also in the [BrokerProperties](/rest/api/servicebus/introduction) JSON object of the HTTP protocol mapping. The equivalent names used at the AMQP protocol level are listed in parentheses.
-While the below names use pascal casing, note that JavaScript and Python clients would use camel and snake casing respectively.
+While the following names use pascal casing, note that JavaScript and Python clients would use camel and snake casing respectively.
| Property Name | Description |
|---|---|
| `EnqueuedTimeUtc` | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver doesn't want to trust the sender's clock. This property is read-only. |
| `ExpiresAtUtc` (absolute-expiry-time) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity because of its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. |
| `Label` or `Subject` (subject) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
-| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not pre-settled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
+| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not presettled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
| `LockToken` | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. |
| `MessageId` (message-id) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. |
| `PartitionKey` | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and can't be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
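As noted above, the JavaScript and Python clients expose these broker properties in camel case and snake case respectively. The naming relationship can be sketched with a couple of conversion helpers (hypothetical helper names, not part of any Service Bus SDK):

```python
import re

def pascal_to_snake(name: str) -> str:
    """Convert a Pascal-case broker property name to the snake_case
    form a Python client would use, e.g. MessageId -> message_id."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def pascal_to_camel(name: str) -> str:
    """Convert Pascal case to the camelCase form a JavaScript client
    would use, e.g. MessageId -> messageId."""
    return name[0].lower() + name[1:]

print(pascal_to_snake("EnqueuedTimeUtc"))  # enqueued_time_utc
print(pascal_to_camel("CorrelationId"))    # correlationId
```

Note that real SDKs define these property names explicitly rather than deriving them mechanically; the helpers only illustrate the convention.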
The abstract message model enables a message to be posted to a queue via HTTPS a
A subset of the broker properties described previously, specifically `To`, `ReplyTo`, `ReplyToSessionId`, `MessageId`, `CorrelationId`, and `SessionId`, are used to help applications route messages to particular destinations. To illustrate this feature, consider a few patterns:
-- **Simple request/reply**: A publisher sends a message into a queue and expects a reply from the message consumer. To receive the reply, the publisher owns a queue into which it expects replies to be delivered. The address of that queue is expressed in the **ReplyTo** property of the outbound message. When the consumer responds, it copies the **MessageId** of the handled message into the **CorrelationId** property of the reply message and delivers the message to the destination indicated by the **ReplyTo** property. One message can yield multiple replies, depending on the application context.
+- **Simple request/reply**: A publisher sends a message into a queue and expects a reply from the message consumer. To receive the reply, the publisher owns a queue into which it expects replies to be delivered. The address of the queue is expressed in the **ReplyTo** property of the outbound message. When the consumer responds, it copies the **MessageId** of the handled message into the **CorrelationId** property of the reply message and delivers the message to the destination indicated by the **ReplyTo** property. One message can yield multiple replies, depending on the application context.
- **Multicast request/reply**: As a variation of the prior pattern, a publisher sends the message into a topic and multiple subscribers become eligible to consume the message. Each of the subscribers might respond in the fashion described previously. This pattern is used in discovery or roll-call scenarios and the respondent typically identifies itself with a user property or inside the payload. If **ReplyTo** points to a topic, such a set of discovery responses can be distributed to an audience.
- **Multiplexing**: This session feature enables multiplexing of streams of related messages through a single queue or subscription such that each session (or group) of related messages, identified by matching **SessionId** values, is routed to a specific receiver while the receiver holds the session under lock. Read more about the details of sessions [here](message-sessions.md).
- **Multiplexed request/reply**: This session feature enables multiplexed replies, allowing several publishers to share a reply queue. By setting **ReplyToSessionId**, the publisher can instruct the consumer(s) to copy that value into the **SessionId** property of the reply message. The publishing queue or topic doesn't need to be session-aware. As the message is sent, the publisher can then specifically wait for a session with the given **SessionId** to materialize on the queue by conditionally accepting a session receiver.
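The simple request/reply correlation described above can be sketched with plain in-memory queues (an illustrative simulation of the property flow, not SDK code):

```python
from queue import Queue

request_queue, reply_queue = Queue(), Queue()

# Publisher sends a request, naming its own reply queue in ReplyTo.
request = {"MessageId": "req-42", "ReplyTo": "replies", "Body": "ping"}
request_queue.put(request)

# Consumer handles the request and copies MessageId into CorrelationId.
handled = request_queue.get()
reply = {"CorrelationId": handled["MessageId"], "Body": "pong"}
reply_queue.put(reply)  # delivered to the queue named in ReplyTo

# Publisher matches the reply back to its original request.
received = reply_queue.get()
assert received["CorrelationId"] == request["MessageId"]
```

The same matching rule extends to the multiplexed variant, where **ReplyToSessionId** is copied into **SessionId** instead.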
Unlike the Java or .NET Standard variants, the .NET Framework version of the Ser
When you use the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. The object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
-While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
+While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients will have trouble decoding such payloads.
The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
-If the payload of a message can't be deserialized, then it is recommended to [dead-letter the message](./service-bus-dead-letter-queues.md?source=recommendations#application-level-dead-lettering).
+If the payload of a message can't be deserialized, then it's recommended to [dead-letter the message](./service-bus-dead-letter-queues.md?source=recommendations#application-level-dead-lettering).
## Next steps
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas
Germany | Germany Central, Germany Northeast
China | China East, China North, China North2, China East2
Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions.
+Restricted Regions reserved for in-country/region disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions.
>[!NOTE] >
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
az spring app-insights update \
This section applies to the Enterprise plan only, and provides instructions that supplement the previous section.
-The Azure Spring Apps Enterprise plan uses buildpack bindings to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights`. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
+The Azure Spring Apps Enterprise plan uses buildpack bindings to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights`. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
To create an Application Insights buildpack binding, use the following command:
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
Previously updated : 09/23/2022 Last updated : 05/25/2023
VMware Tanzu Build Service automates container creation, management, and governa
## Buildpacks
-VMware Tanzu Buildpacks provide framework and runtime support for applications. Buildpacks typically examine your applications to determine what dependencies to download and how to configure the apps to communicate with bound services.
+VMware Tanzu Buildpacks provide framework and runtime support for applications. Buildpacks typically examine your applications to determine what dependencies to download and how to configure applications to communicate with bound services.
-The [language family buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-https://docsupdatetracker.net/index.html) are [composite buildpacks](https://paketo.io/docs/concepts/buildpacks/#composite-buildpacks) that provide easy out-of-the-box support for the most popular language runtimes and app configurations. These buildpacks combine multiple component buildpacks into ordered groupings. The groupings satisfy each buildpackΓÇÖs requirements.
+The [language family buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-https://docsupdatetracker.net/index.html) are [composite buildpacks](https://paketo.io/docs/concepts/buildpacks/#composite-buildpacks) that provide easy out-of-the-box support for the most popular language runtimes and app configurations. These buildpacks combine multiple component buildpacks into ordered groupings. The groupings satisfy each buildpack's requirements.
## Builders
A [Builder](https://docs.vmware.com/en/Tanzu-Build-Service/1.6/vmware-tanzu-buil
Tanzu Build Service in the Enterprise plan is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps.
-The following table shows the build agent pool scale set sizes available:
+The following table shows the sizes available for build agent pool scale sets:
-| Scale Set | CPU/Gi |
+| Scale set | CPU/Gi |
|--|--|
| S1 | 2 vCPU, 4 Gi |
| S2 | 3 vCPU, 6 Gi |
| S8 | 32 vCPU, 64 Gi |
| S9 | 64 vCPU, 128 Gi |
-Tanzu Build Service allows at most one pool-sized build task to build and twice the pool-sized build tasks to queue. If the quota of the agent pool is insufficient for the build task, the request for this build will get the following error: `The usage of build results in Building or Queuing status are (cpu: xxx, memory: xxxMi) and the remained quota is insufficient for this build. please retry with smaller size of build resourceRequests, retry after the previous build process completed or increased your build agent pool size`.
+Tanzu Build Service allows at most one pool-sized build task to build and twice the pool-sized build tasks to queue. If the quota of the agent pool is insufficient for the build task, the request for this build gets the following error: `The usage of build results in Building or Queuing status are (cpu: xxx, memory: xxxMi) and the remained quota is insufficient for this build. please retry with smaller size of build resourceRequests, retry after the previous build process completed or increased your build agent pool size`.
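The quota rule above (at most one pool-sized set of builds running, twice the pool size queued) can be sketched as a simple admission check. This is an illustrative simplification: the real service checks both CPU and memory, and the function name is hypothetical.

```python
def can_accept_build(pool_cpu: float, building_cpu: float,
                     queued_cpu: float, requested_cpu: float) -> bool:
    """Admit a build if running builds stay within one pool size,
    or otherwise if queued builds stay within twice the pool size."""
    if building_cpu + requested_cpu <= pool_cpu:
        return True  # can start building immediately
    return queued_cpu + requested_cpu <= 2 * pool_cpu  # can queue

# S2 pool (3 vCPU): a 2-vCPU build fits while 1 vCPU is already building.
print(can_accept_build(3, 1, 0, 2))  # True
```

When the check fails, the service returns the quota error quoted above rather than queuing the build.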
## Configure the build agent pool
-When you create a new Azure Spring Apps service instance using the Azure portal, you can use the **VMware Tanzu settings** tab to configure the number of resources given to the build agent pool.
+When you create a new Azure Spring Apps Enterprise service instance using the Azure portal, you can use the **VMware Tanzu settings** tab to configure the number of resources given to the build agent pool.
:::image type="content" source="media/how-to-enterprise-build-service/agent-pool.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page with V M ware Tanzu settings highlighted and Allocated Resources dropdown showing." lightbox="media/how-to-enterprise-build-service/agent-pool.png":::
-The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size here after the service instance is created.
+The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size here after you've created the service instance.
:::image type="content" source="media/how-to-enterprise-build-service/agent-pool-size.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Build Service page with 'General info' highlighted." lightbox="media/how-to-enterprise-build-service/agent-pool-size.png":::
-## Use the default builder to deploy an app
+## Build service on demand
-In the Enterprise plan, the `default` builder includes all the language family buildpacks supported in Azure Spring Apps so you can use it to build polyglot apps.
+You can enable or disable the build service when you create an Azure Spring Apps Enterprise plan instance.
-The `default` builder is read only, so you can't edit or delete it. When you deploy an app, if you don't specify the builder, the `default` builder will be used, making the following two commands equivalent.
+### Build and deployment characteristics
-```azurecli
-az spring app deploy \
- --name <app-name> \
- --artifact-path <path-to-your-JAR-file>
-```
+By default, Tanzu Build Service is enabled so that you can use a container registry. If you disable the build service, you can deploy an application only with a custom container image. You have the following options:
-```azurecli
-az spring app deploy \
- --name <app-name> \
- --artifact-path <path-to-your-JAR-file> \
- --builder default
-```
+- Enable the build service and use the Azure Spring Apps managed container registry.
-For more information about deploying a polyglot app, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).
+   Azure Spring Apps provides a managed Azure Container Registry to store built images for your applications. Build and deployment execute together as a single command, not as separate steps. You can use the built container images to deploy applications in the same service instance only. The images aren't accessible from other Azure Spring Apps Enterprise service instances.
-## Configure APM integration and CA certificates
+- Enable the build service and use your own container registry.
+
+ This scenario separates build from deployment. You can execute builds from an application's source code or artifacts to a container image separately from the application deployment. You can deploy the container images stored in your own container registry to multiple Azure Spring Apps Enterprise service instances.
+
+- Disable the build service.
+
+ When you disable the build service, you can deploy applications only with container images, which you can build from any Azure Spring Apps Enterprise service instance.
+
+### Configure build service settings
+
+You can configure Tanzu Build Service and container registry settings using the Azure portal or the Azure CLI.
+
+#### [Azure portal](#tab/azure-portal)
+
+Use the following steps to enable Tanzu Build Service when provisioning an Azure Spring Apps service instance:
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. On the **Basics** tab, select **Enterprise tier** in the **Pricing** section, and then specify the required information.
+1. Select **Next: VMware Tanzu settings**.
+1. On the **VMware Tanzu settings** tab, select **Enable Build Service**. For **Container registry**, the default setting is **Use a managed Azure Container Registry to store built images**.
+
+ :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with default Build Service settings highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-default-acr.png":::
+
+1. If you select **Use your own container registry to store built images (preview)** for **Container registry**, provide your container registry's server, username, and password.
+
+ :::image type="content" source="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with use your own container registry highlighted." lightbox="media/how-to-enterprise-build-service/enable-build-service-with-user-acr.png":::
+
+1. If you disable **Enable Build Service**, the container registry options aren't provided but you can deploy applications with container images.
-By using Tanzu Partner Buildpacks and CA Certificates Buildpack, the Enterprise plan provides a simplified configuration experience to support application performance monitor (APM) integration and certificate authority (CA) certificates integration scenarios for polyglot apps. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
+ :::image type="content" source="media/how-to-enterprise-build-service/disable-build-service.png" alt-text="Screenshot of the Azure portal showing V M ware Tanzu Settings for the Azure Spring Apps Create page with the Enable Build Service not selected." lightbox="media/how-to-enterprise-build-service/disable-build-service.png":::
-## Manage custom builders
+1. Select **Review and create**.
-As an alternative to the `default` builder, you can create custom builders with the provided buildpacks.
+#### [Azure CLI](#tab/azure-cli)
-All the builders configured in an Azure Spring Apps service instance are listed in the **Build Service** section under **VMware Tanzu components**, as shown in the following screenshot:
+Use the following steps to enable Tanzu Build Service when provisioning an Azure Spring Apps service instance:
+1. Use the following commands to sign in to the Azure CLI, list available subscriptions, and set your active subscription:
-Select **Add** to create a new builder. The following screenshot shows the resources you should use to create the custom builder. The [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html) includes `Bionic Base`, `Bionic Full`, `Jammy Base`, and `Jammy Full`. Bionic is based on `Ubuntu 18.04 (Bionic Beaver)` and Jammy is based on `Ubuntu 22.04 (Jammy Jellyfish)`. For more information, see [Ubuntu Stacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-stacks.html#ubuntu-stacks) in the VMware documentation.
+ ```azurecli
+ az login
+ az account list --output table
+ az account set --subscription <subscription-id>
+ ```
+1. Use the following command to register the `Microsoft.SaaS` namespace.
-You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the OS Stack, but the builder name is read only.
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ ```
+1. Use the following command to accept the legal terms and privacy statements for the Azure Spring Apps Enterprise plan. This step is necessary only if your subscription has never been used to create an Enterprise plan instance.
-You can delete any custom builder when the builder isn't used in a deployment.
+ ```azurecli
+ az term accept \
+ --plan asa-ent-hr-mtr \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --publisher vmware-inc
+ ```
-## Build apps using a custom builder
+1. Select a location. The location must support the Azure Spring Apps Enterprise plan. For more information, see [Azure Spring Apps FAQ](faq.md).
-When you deploy an app, you can use the following command to build the app by specifying a specific builder:
+1. Use the following command to create a resource group:
-```azurecli
-az spring app deploy \
- --name <app-name> \
- --builder <builder-name> \
- --artifact-path <path-to-your-JAR-file>
-```
+ ```azurecli
+ az group create \
+ --name <resource-group-name> \
+ --location <location>
+ ```
-The builder is a resource that continuously contributes to your deployments. The builder provides the latest runtime images and latest buildpacks.
+ For more information about resource groups, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md)
-You can't delete a builder when existing active deployments are built by the builder. To delete such a builder, save the configuration as a new builder first. After you deploy apps with the new builder, the deployments are linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and then delete the original builder.
+1. Prepare a name for your Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+
+1. Use one of the following commands to create an Azure Spring Apps service instance:
+
+ - Use the following command to create an Azure Spring Apps service instance with the build service enabled and using a managed Azure Container Registry. The build service is enabled by default.
+
+ ```azurecli
+ az spring create \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-service-instance-name> \
+ --sku enterprise
+ ```
+
+ - Use the following command to create an Azure Spring Apps service instance with the build service enabled and using your own container registry. The build service is enabled by default.
+
+ ```azurecli
+ az spring create \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-service-instance-name> \
+ --sku enterprise \
+ --registry-server <your-container-registry-login-server> \
+ --registry-username <your-container-registry-username> \
+ --registry-password <your-container-registry-password>
+ ```
+
+ - Use the following command to create an Azure Spring Apps service instance with the build service disabled.
+
+ ```azurecli
+ az spring create \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-service-instance-name> \
+ --sku enterprise \
+ --disable-build-service
+ ```
+++
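The service-instance naming rule in the CLI steps above (4 to 32 characters; lowercase letters, numbers, and hyphens only; first character a letter; last character a letter or number) can be expressed as a single regular expression. This is a hypothetical validation helper, not part of the Azure CLI:

```python
import re

# First char: lowercase letter; middle 2-30 chars: letters/digits/hyphens;
# last char: letter or digit. Total length 4-32.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]{2,30}[a-z0-9]$")

def is_valid_instance_name(name: str) -> bool:
    return NAME_PATTERN.fullmatch(name) is not None

print(is_valid_instance_name("my-spring-apps-1"))  # True
print(is_valid_instance_name("1bad"))              # False
```

Validating locally avoids a round trip to the service just to discover a rejected name.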
+## Deploy polyglot applications
+
+You can deploy polyglot applications in an Azure Spring Apps Enterprise service instance with Tanzu Build Service either enabled or disabled. For more information, see [How to deploy polyglot apps in Azure Spring Apps Enterprise](how-to-enterprise-deploy-polyglot-apps.md).
+
+## Configure APM integration and CA certificates
-For more information about deploying a polyglot app, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).
+By using Tanzu Partner Buildpacks and CA Certificates Buildpack, the Azure Spring Apps Enterprise plan provides a simplified configuration experience to support application performance monitor (APM) integration. This integration includes certificate authority (CA) certificates integration scenarios for polyglot applications. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
## Real-time build logs
-A build task will be triggered when an app is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information on using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+A build task is triggered when an application is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information about using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
## Next steps
-- [Azure Spring Apps](index.yml)
+- [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md)
spring-apps How To Enterprise Configure Apm Integration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md
+
+ Title: How to configure APM integration and CA certificates
+
+description: Shows you how to configure APM integration and CA certificates in the Azure Spring Apps Enterprise plan.
++++ Last updated : 05/25/2023+++
+# How to configure APM integration and CA certificates
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
+
+This article shows you how to configure application performance monitor (APM) integration and certificate authority (CA) certificates in the Azure Spring Apps Enterprise plan.
+
+You can enable or disable Tanzu Build Service on an Azure Spring Apps Enterprise plan instance. For more information, see the [Build service on demand](how-to-enterprise-build-service.md#build-service-on-demand) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+
+## Prerequisites
+
+- An already provisioned Azure Spring Apps Enterprise plan instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`
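+
+The prerequisite setup can be verified from a shell. The following sketch assumes the Azure CLI is already installed:
+
+```azurecli
+# Install or upgrade the Azure Spring Apps extension.
+az extension add --upgrade --name spring
+
+# Confirm the Azure CLI version meets the 2.45.0 minimum.
+az version --query '"azure-cli"' --output tsv
+```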
+
+## Supported scenarios - APM and CA certificates integration
+
+Tanzu Build Service is enabled by default in Azure Spring Apps Enterprise. If you choose to disable the build service, you can deploy applications but only by using a custom container image. This section provides guidance for both enabled and disabled scenarios.
+
+### [Build service enabled](#tab/enable-build-service)
+
+Tanzu Build Service uses buildpack binding to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html) and other cloud native buildpacks such as the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack on GitHub.
+
+Currently, Azure Spring Apps supports the following APM types and CA certificates:
+
+- ApplicationInsights
+- Dynatrace
+- AppDynamics
+- New Relic
+- ElasticAPM
+
+Azure Spring Apps supports CA certificates for all language family buildpacks, but each buildpack supports only a subset of the APM types. The following table shows the binding types supported by the Tanzu language family buildpacks.
+
+| Buildpack | ApplicationInsights | New Relic | AppDynamics | Dynatrace | ElasticAPM |
+|-----------|---------------------|-----------|-------------|-----------|------------|
+| Java | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Dotnet | | | | ✔️ | |
+| Go | | | | ✔️ | |
+| Python | | | | | |
+| NodeJS | | ✔️ | ✔️ | ✔️ | ✔️ |
+| Web servers | | | | ✔️ | |
+
+For information about using Web servers, see [Deploy web static files](how-to-enterprise-deploy-static-file.md).
+
+When you enable the build service, APM and CA certificates are integrated with a builder, as described in the [Manage APM integration and CA certificates in Azure Spring Apps](#manage-apm-integration-and-ca-certificates-in-azure-spring-apps) section.
+
+When the build service uses the Azure Spring Apps managed container registry, you can build an application to an image and then deploy it, but only within the current Azure Spring Apps service instance.
+
+Use the following command to integrate APM and CA certificates into your deployments:
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+If you provide your own container registry to use with the build service, you can build an application into a container image and deploy the image to the current or other Azure Spring Apps Enterprise service instances.
+
+Providing your own container registry separates building from deployment. You can use the build command to create or update a build with a builder, then use the deploy command to deploy the container image to the service. In this scenario, you need to specify the APM-required environment variables on deployment.
+
+Use the following command to build an image:
+
+```azurecli
+az spring build-service build <create|update> \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+Use the following command to deploy with a container image, using the `--env` parameter to configure runtime environment:
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --container-image <your-container-image> \
+ --container-registry <your-container-registry> \
+ --registry-password <your-password> \
+ --registry-username <your-username> \
+ --env NEW_RELIC_APP_NAME=<app-name> \
+ NEW_RELIC_LICENSE_KEY=<your-license-key>
+```
+
+#### Supported APM resources with the build service enabled
+
+This section lists the supported languages and required environment variables for the APMs that you can use for your integrations.
+
+- **Application Insights**
+
+ Supported languages:
+ - Java
+
+ Environment variables required for buildpack binding:
+ - `connection-string`
+
+ Environment variables required for deploying an app with a custom image:
+ - `APPLICATIONINSIGHTS_CONNECTION_STRING`
+
+ > [!NOTE]
+ > Upper-case keys are allowed, and you can replace underscores (`_`) with hyphens (`-`).
+
+ For other supported environment variables, see [Application Insights Overview](../azure-monitor/app/app-insights-overview.md?tabs=java).
+
+- **DynaTrace**
+
+ Supported languages:
+ - Java
+ - .NET
+ - Go
+ - Node.js
+ - WebServers
+
+ Environment variables required for buildpack binding:
+ - `api-url` or `environment-id` (used in build step)
+ - `api-token` (used in build step)
+ - `TENANT`
+ - `TENANTTOKEN`
+ - `CONNECTION_POINT`
+
+ Environment variables required for deploying an app with a custom image:
+ - `DT_TENANT`
+ - `DT_TENANTTOKEN`
+ - `DT_CONNECTION_POINT`
+
+ For other supported environment variables, see [Dynatrace](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar).
+
+- **New Relic**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Environment variables required for buildpack binding:
+ - `license_key`
+ - `app_name`
+
+ Environment variables required for deploying an app with a custom image:
+ - `NEW_RELIC_LICENSE_KEY`
+ - `NEW_RELIC_APP_NAME`
+
+ For other supported environment variables, see [New Relic](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+
+- **Elastic**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Environment variables required for buildpack binding:
+ - `service_name`
+ - `application_packages`
+ - `server_url`
+
+ Environment variables required for deploying an app with a custom image:
+ - `ELASTIC_APM_SERVICE_NAME`
+ - `ELASTIC_APM_APPLICATION_PACKAGES`
+ - `ELASTIC_APM_SERVER_URL`
+
+ For other supported environment variables, see [Elastic](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+
+- **AppDynamics**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Environment variables required for buildpack binding:
+ - `agent_application_name`
+ - `agent_tier_name`
+ - `agent_node_name`
+ - `agent_account_name`
+ - `agent_account_access_key`
+ - `controller_host_name`
+ - `controller_ssl_enabled`
+ - `controller_port`
+
+ Environment variables required for deploying an app with a custom image:
+ - `APPDYNAMICS_AGENT_APPLICATION_NAME`
+ - `APPDYNAMICS_AGENT_TIER_NAME`
+ - `APPDYNAMICS_AGENT_NODE_NAME`
+ - `APPDYNAMICS_AGENT_ACCOUNT_NAME`
+ - `APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`
+ - `APPDYNAMICS_CONTROLLER_HOST_NAME`
+ - `APPDYNAMICS_CONTROLLER_SSL_ENABLED`
+ - `APPDYNAMICS_CONTROLLER_PORT`
+
+ For other supported environment variables, see [AppDynamics](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties).
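+
+Putting the binding keys above together, the following sketch creates a New Relic buildpack binding on a builder. The builder and binding names here are hypothetical placeholders:
+
+```azurecli
+# Create a New Relic binding on the default builder (hypothetical names).
+az spring build-service builder buildpack-binding create \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --builder-name default \
+    --name new-relic-binding \
+    --type NewRelic \
+    --properties app_name=<app-name> \
+    --secrets license_key=<your-license-key>
+```
+
+Keys such as `app_name` and `license_key` match the buildpack binding variables listed for New Relic; the buildpack injects them into the app at build and run time.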
+
+## Use CA certificates
+
+Azure Spring Apps uses the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack to provide CA certificates to the system trust store at build and run time.
+
+In the Azure Spring Apps Enterprise plan, you import CA certificates on the **Public Key Certificates** tab of the **TLS/SSL settings** page in the Azure portal.
++
+You can configure the CA certificates on the **Edit binding** page. Certificates with a `succeeded` provisioning state appear in the **CA Certificates** list.
++
+### [Build service disabled](#tab/disable-build-service)
+
+If you disable the build service, you can only deploy an application with a container image. For more information, see [Deploy an application with a custom container image](how-to-deploy-with-custom-container-image.md).
+
+You can use multiple instances of Azure Spring Apps Enterprise, where some instances build and deploy images and others only deploy images. Consider the following scenario:
+
+- For one instance, you enable the build service with a user container registry. You then build from an artifact file or source code, with an APM or CA certificate, into a container image, and deploy it to the current Azure Spring Apps instance or to other service instances. For more information, see the [Build and deploy polyglot applications](how-to-enterprise-deploy-polyglot-apps.md#build-and-deploy-polyglot-applications) section of [How to deploy polyglot apps in Azure Spring Apps Enterprise](how-to-enterprise-deploy-polyglot-apps.md).
+
+- In another instance with the build service disabled, you deploy an application with the container image in your registry and also make use of APM and CA certificates.
+
+Because this deployment supports only a custom container image, you must use the `--env` parameter to configure the runtime environment. The following command provides an example:
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --container-image <your-container-image> \
+ --container-registry <your-container-registry> \
+ --registry-password <your-password> \
+ --registry-username <your-username> \
+ --env NEW_RELIC_APP_NAME=<app-name> NEW_RELIC_LICENSE_KEY=<your-license-key>
+```
+
+### Supported APM resources with the build service disabled
+
+This section lists the supported languages and required environment variables for the APMs that you can use for your integrations.
+
+- **Application Insights**
+
+ Supported languages:
+ - Java
+
+ Required runtime environment variables:
+ - `APPLICATIONINSIGHTS_CONNECTION_STRING`
+
+ For other supported environment variables, see [Application Insights Overview](../azure-monitor/app/app-insights-overview.md?tabs=java)
+
+- **Dynatrace**
+
+ Supported languages:
+ - Java
+ - .NET
+ - Go
+ - Node.js
+ - WebServers
+
+ Required runtime environment variables:
+
+ - `DT_TENANT`
+ - `DT_TENANTTOKEN`
+ - `DT_CONNECTION_POINT`
+
+ For other supported environment variables, see [Dynatrace](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar).
+
+- **New Relic**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Required runtime environment variables:
+ - `NEW_RELIC_LICENSE_KEY`
+ - `NEW_RELIC_APP_NAME`
+
+ For other supported environment variables, see [New Relic](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+
+- **ElasticAPM**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Required runtime environment variables:
+ - `ELASTIC_APM_SERVICE_NAME`
+ - `ELASTIC_APM_APPLICATION_PACKAGES`
+ - `ELASTIC_APM_SERVER_URL`
+
+ For other supported environment variables, see [Elastic](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+
+- **AppDynamics**
+
+ Supported languages:
+ - Java
+ - Node.js
+
+ Required runtime environment variables:
+ - `APPDYNAMICS_AGENT_APPLICATION_NAME`
+ - `APPDYNAMICS_AGENT_TIER_NAME`
+ - `APPDYNAMICS_AGENT_NODE_NAME`
+ - `APPDYNAMICS_AGENT_ACCOUNT_NAME`
+ - `APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`
+ - `APPDYNAMICS_CONTROLLER_HOST_NAME`
+ - `APPDYNAMICS_CONTROLLER_SSL_ENABLED`
+ - `APPDYNAMICS_CONTROLLER_PORT`
+
+ For other supported environment variables, see [AppDynamics](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties).
+++
+## Manage APM integration and CA certificates in Azure Spring Apps
+
+This section applies only to an Azure Spring Apps Enterprise service instance with the build service enabled. With the build service enabled, one buildpack binding means either credential configuration against one APM type, or CA certificates configuration against the CA certificates type. For APM integration, follow the earlier instructions to configure the necessary environment variables or secrets for your APM.
+
+> [!NOTE]
+> When you configure environment variables for APM bindings, use key names without a prefix. For example, don't use a `DT_` prefix for a Dynatrace binding or an `APPLICATIONINSIGHTS_` prefix for Application Insights. Tanzu APM buildpacks transform the key names into the original prefixed environment variable names.
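+
+For example, an Application Insights binding takes the key `connection-string`, which the buildpack expands to `APPLICATIONINSIGHTS_CONNECTION_STRING` at runtime. A minimal sketch, with a hypothetical binding name:
+
+```azurecli
+# The secret key omits the APPLICATIONINSIGHTS_ prefix; the buildpack adds it at runtime.
+az spring build-service builder buildpack-binding create \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --builder-name default \
+    --name app-insights-binding \
+    --type ApplicationInsights \
+    --secrets connection-string=<your-connection-string>
+```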
+
+You can manage buildpack bindings with the Azure portal or the Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to view the buildpack bindings:
+
+1. In the Azure portal, go to your Azure Spring Apps Enterprise service instance.
+1. In the navigation pane, select **Build Service**.
+1. Select **Edit** under the **Bindings** column to view the bindings configured for a builder.
+
+ :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/edit-binding.png" alt-text="Screenshot of Azure portal showing the Build Service page with the Bindings Edit link highlighted for a selected builder." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/edit-binding.png":::
+
+1. Review the bindings on the **Edit binding for default builder** page.
+
+ :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-service-binding.png" alt-text="Screenshot of Azure portal showing the Edit bindings for default builder page with the binding types and their status listed.":::
+
+### Create a buildpack binding
+
+To create a buildpack binding, select **Unbound** on the **Edit Bindings** page, specify the binding properties, and then select **Save**.
+
+### Unbind a buildpack binding
+
+You can unbind a buildpack binding by using the **Unbind binding** command, or by editing the binding properties.
+
+To use the **Unbind binding** command, select the **Bound** hyperlink, and then select **Unbind binding**.
++
+To unbind a buildpack binding by editing binding properties, select **Edit Binding**, and then select **Unbind**.
++
+When you unbind a binding, the bind status changes from **Bound** to **Unbound**.
+
+### [Azure CLI](#tab/azure-cli)
+
+### View buildpack bindings using the Azure CLI
+
+View the current buildpack bindings by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding list \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --builder-name <your-builder-name>
+```
+
+### Create a binding
+
+Use this command to change the binding from *Unbound* to *Bound* status:
+
+```azurecli
+az spring build-service builder buildpack-binding create \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name> \
+ --type <your-binding-type> \
+ --properties a=b c=d \
+ --secrets e=f g=h
+```
+
+For information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
+
+### Show the details for a specific binding
+
+You can view the details of a specific binding by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding show \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+```
+
+### Edit the properties of a binding
+
+You can change a binding's properties by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding set \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name> \
+ --type <your-binding-type> \
+ --properties a=b c=d \
+ --secrets e=f2 g=h
+```
+
+For more information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
+
+### Delete a binding
+
+Use the following command to delete a binding, changing its status from *Bound* to *Unbound*:
+
+```azurecli
+az spring build-service builder buildpack-binding delete \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+```
+++
+## Next steps
+
+- [How to deploy polyglot apps in Azure Spring Apps Enterprise](how-to-enterprise-deploy-polyglot-apps.md)
spring-apps How To Enterprise Configure Apm Intergration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-intergration-and-ca-certificates.md
- Title: How to configure APM integration and CA certificates-
-description: How to configure APM integration and CA certificates
-
- Previously updated : 01/13/2023
-
-# How to configure APM integration and CA certificates
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
-
-This article